| Time | Namespace | Component | RelatedObject | Reason | Message |
| --- | --- | --- | --- | --- | --- |
| | openshift-controller-manager | | controller-manager-78c5d9fccd-5xwlt | Scheduled | Successfully assigned openshift-controller-manager/controller-manager-78c5d9fccd-5xwlt to master-0 |
| | metallb-system | | speaker-hfvls | Scheduled | Successfully assigned metallb-system/speaker-hfvls to master-0 |
| | openshift-apiserver | apiserver | apiserver-6576f6bc9d-r2fhv | TerminationMinimalShutdownDurationFinished | The minimal shutdown duration of 50s finished |
| | openshift-marketplace | | redhat-marketplace-ck2g8 | Scheduled | Successfully assigned openshift-marketplace/redhat-marketplace-ck2g8 to master-2 |
| | openshift-console | | console-564c479f-7bglk | Scheduled | Successfully assigned openshift-console/console-564c479f-7bglk to master-2 |
| | openshift-marketplace | | redhat-marketplace-ssf75 | Scheduled | Successfully assigned openshift-marketplace/redhat-marketplace-ssf75 to master-2 |
| | openshift-marketplace | | redhat-operators-6kmvg | Scheduled | Successfully assigned openshift-marketplace/redhat-operators-6kmvg to master-2 |
| | openshift-marketplace | | redhat-operators-cqdz4 | Scheduled | Successfully assigned openshift-marketplace/redhat-operators-cqdz4 to master-2 |
| | metallb-system | | frr-k8s-nnbg4 | Scheduled | Successfully assigned metallb-system/frr-k8s-nnbg4 to master-1 |
| | openshift-marketplace | | redhat-operators-jc2hh | Scheduled | Successfully assigned openshift-marketplace/redhat-operators-jc2hh to master-2 |
| | openshift-marketplace | | redhat-operators-m2vwm | FailedScheduling | running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "redhat-operators-m2vwm": pod redhat-operators-m2vwm is already assigned to node "master-2" |
| | openshift-marketplace | | redhat-operators-vqdj2 | Scheduled | Successfully assigned openshift-marketplace/redhat-operators-vqdj2 to master-2 |
| | openshift-monitoring | | alertmanager-main-0 | Scheduled | Successfully assigned openshift-monitoring/alertmanager-main-0 to master-2 |
| | openshift-monitoring | | alertmanager-main-0 | Scheduled | Successfully assigned openshift-monitoring/alertmanager-main-0 to master-2 |
| | openshift-console | | console-554dc689f9-rnmmd | Scheduled | Successfully assigned openshift-console/console-554dc689f9-rnmmd to master-0 |
| | cert-manager | | cert-manager-7d4cc89fcb-mcxbx | Scheduled | Successfully assigned cert-manager/cert-manager-7d4cc89fcb-mcxbx to master-0 |
| | openshift-console | | console-554dc689f9-rnmmd | FailedScheduling | 0/3 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 node(s) didn't match pod anti-affinity rules. |
| | openshift-console | | console-554dc689f9-rnmmd | FailedScheduling | 0/3 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 node(s) didn't match pod anti-affinity rules. |
| | openstack-operators | | watcher-operator-controller-manager-7c4579d8cf-pqbbd | Scheduled | Successfully assigned openstack-operators/watcher-operator-controller-manager-7c4579d8cf-pqbbd to master-1 |
| | openstack-operators | | test-operator-controller-manager-565dfd7bb9-c6fnn | Scheduled | Successfully assigned openstack-operators/test-operator-controller-manager-565dfd7bb9-c6fnn to master-2 |
| | openstack-operators | | telemetry-operator-controller-manager-7585684bd7-5wlc8 | Scheduled | Successfully assigned openstack-operators/telemetry-operator-controller-manager-7585684bd7-5wlc8 to master-0 |
| | openstack-operators | | swift-operator-controller-manager-6d4f9d7767-dj7p8 | Scheduled | Successfully assigned openstack-operators/swift-operator-controller-manager-6d4f9d7767-dj7p8 to master-0 |
| | openstack-operators | | rabbitmq-cluster-operator-manager-84795b7cfd-9mg2n | Scheduled | Successfully assigned openstack-operators/rabbitmq-cluster-operator-manager-84795b7cfd-9mg2n to master-1 |
| | openstack-operators | | placement-operator-controller-manager-569c9576c5-4zgfk | Scheduled | Successfully assigned openstack-operators/placement-operator-controller-manager-569c9576c5-4zgfk to master-1 |
| | cert-manager | | cert-manager-cainjector-7d9f95dbf-pxbjj | Scheduled | Successfully assigned cert-manager/cert-manager-cainjector-7d9f95dbf-pxbjj to master-0 |
| | openstack-operators | | ovn-operator-controller-manager-f9dd6d5b6-46wwk | Scheduled | Successfully assigned openstack-operators/ovn-operator-controller-manager-f9dd6d5b6-46wwk to master-1 |
| | openstack-operators | | openstack-operator-index-nw58t | Scheduled | Successfully assigned openstack-operators/openstack-operator-index-nw58t to master-0 |
| | openstack-operators | | openstack-operator-index-ff576 | Scheduled | Successfully assigned openstack-operators/openstack-operator-index-ff576 to master-0 |
| | openstack-operators | | openstack-operator-controller-operator-64895cd698-tkclq | Scheduled | Successfully assigned openstack-operators/openstack-operator-controller-operator-64895cd698-tkclq to master-0 |
| | openstack-operators | | openstack-operator-controller-operator-548bfb9499-crk7m | Scheduled | Successfully assigned openstack-operators/openstack-operator-controller-operator-548bfb9499-crk7m to master-0 |
| | openstack-operators | | openstack-operator-controller-manager-6566ff98d5-wbc89 | Scheduled | Successfully assigned openstack-operators/openstack-operator-controller-manager-6566ff98d5-wbc89 to master-2 |
| | openstack-operators | | openstack-baremetal-operator-controller-manager-69958697d76f9td | Scheduled | Successfully assigned openstack-operators/openstack-baremetal-operator-controller-manager-69958697d76f9td to master-2 |
| | cert-manager | | cert-manager-webhook-d969966f-ddrnx | Scheduled | Successfully assigned cert-manager/cert-manager-webhook-d969966f-ddrnx to master-0 |
| | openstack-operators | | octavia-operator-controller-manager-f456fb6cd-nb6ph | Scheduled | Successfully assigned openstack-operators/octavia-operator-controller-manager-f456fb6cd-nb6ph to master-0 |
| | openstack-operators | | nova-operator-controller-manager-64487ccd4d-8gqsb | Scheduled | Successfully assigned openstack-operators/nova-operator-controller-manager-64487ccd4d-8gqsb to master-0 |
| | openstack-operators | | neutron-operator-controller-manager-7c95684bcc-qn2dm | Scheduled | Successfully assigned openstack-operators/neutron-operator-controller-manager-7c95684bcc-qn2dm to master-2 |
| | openstack-operators | | mariadb-operator-controller-manager-7f4856d67b-sgjwb | Scheduled | Successfully assigned openstack-operators/mariadb-operator-controller-manager-7f4856d67b-sgjwb to master-0 |
| | openstack-operators | | manila-operator-controller-manager-6d78f57554-t6sj6 | Scheduled | Successfully assigned openstack-operators/manila-operator-controller-manager-6d78f57554-t6sj6 to master-2 |
| | openstack-operators | | keystone-operator-controller-manager-f4487c759-hdfw8 | Scheduled | Successfully assigned openstack-operators/keystone-operator-controller-manager-f4487c759-hdfw8 to master-1 |
| | openstack-operators | | ironic-operator-controller-manager-6b498574d4-tcqkg | Scheduled | Successfully assigned openstack-operators/ironic-operator-controller-manager-6b498574d4-tcqkg to master-2 |
| | openstack-operators | | infra-operator-controller-manager-d68fd5cdf-sbpvg | Scheduled | Successfully assigned openstack-operators/infra-operator-controller-manager-d68fd5cdf-sbpvg to master-1 |
| | openstack-operators | | horizon-operator-controller-manager-54969ff695-hgxjt | Scheduled | Successfully assigned openstack-operators/horizon-operator-controller-manager-54969ff695-hgxjt to master-0 |
| | openshift-monitoring | | alertmanager-main-1 | Scheduled | Successfully assigned openshift-monitoring/alertmanager-main-1 to master-1 |
| | openstack-operators | | heat-operator-controller-manager-68fc865f87-c8wmp | Scheduled | Successfully assigned openstack-operators/heat-operator-controller-manager-68fc865f87-c8wmp to master-0 |
| | openshift-monitoring | | alertmanager-main-1 | Scheduled | Successfully assigned openshift-monitoring/alertmanager-main-1 to master-0 |
| | openshift-oauth-apiserver | apiserver | apiserver-c57444595-mj7cx | TerminationStart | Received signal to terminate, becoming unready, but keeping serving |
| | openstack-operators | | glance-operator-controller-manager-59bd97c6b9-s2zqv | Scheduled | Successfully assigned openstack-operators/glance-operator-controller-manager-59bd97c6b9-s2zqv to master-1 |
| | openshift-console | | console-554dc689f9-c5k9h | Scheduled | Successfully assigned openshift-console/console-554dc689f9-c5k9h to master-2 |
| | openstack-operators | | designate-operator-controller-manager-67d84b9cc-698kz | Scheduled | Successfully assigned openstack-operators/designate-operator-controller-manager-67d84b9cc-698kz to master-1 |
| | openstack-operators | | cinder-operator-controller-manager-5484486656-vvnpp | Scheduled | Successfully assigned openstack-operators/cinder-operator-controller-manager-5484486656-vvnpp to master-1 |
| | openstack-operators | | barbican-operator-controller-manager-658c7b459c-pwzgf | Scheduled | Successfully assigned openstack-operators/barbican-operator-controller-manager-658c7b459c-pwzgf to master-0 |
| | openshift-ingress-canary | | ingress-canary-fkqvb | Scheduled | Successfully assigned openshift-ingress-canary/ingress-canary-fkqvb to master-0 |
| | openstack-operators | | 32da80840a2017f27ed4ad61f02adc64a25aa18e8dad0409953372036ajskqr | Scheduled | Successfully assigned openstack-operators/32da80840a2017f27ed4ad61f02adc64a25aa18e8dad0409953372036ajskqr to master-2 |
| | openshift-storage | | vg-manager-zvnk6 | Scheduled | Successfully assigned openshift-storage/vg-manager-zvnk6 to master-2 |
| | openshift-oauth-apiserver | apiserver | apiserver-c57444595-mj7cx | TerminationMinimalShutdownDurationFinished | The minimal shutdown duration of 50s finished |
| | openshift-console | | console-554dc689f9-c5k9h | FailedScheduling | 0/3 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 node(s) didn't match pod anti-affinity rules. |
| | openshift-storage | | vg-manager-sr6bp | Scheduled | Successfully assigned openshift-storage/vg-manager-sr6bp to master-0 |
| | openshift-storage | | vg-manager-jdht5 | Scheduled | Successfully assigned openshift-storage/vg-manager-jdht5 to master-1 |
| | openshift-storage | | lvms-operator-54844bd599-xsrzw | Scheduled | Successfully assigned openshift-storage/lvms-operator-54844bd599-xsrzw to master-0 |
| | openshift-console | | console-554dc689f9-c5k9h | FailedScheduling | 0/3 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 node(s) didn't match pod anti-affinity rules. |
| | openshift-apiserver | apiserver | apiserver-6576f6bc9d-r2fhv | TerminationStart | Received signal to terminate, becoming unready, but keeping serving |
| | openshift-nmstate | | nmstate-handler-lkd88 | Scheduled | Successfully assigned openshift-nmstate/nmstate-handler-lkd88 to master-1 |
| | openshift-route-controller-manager | | route-controller-manager-7968c6c999-b54xp | Scheduled | Successfully assigned openshift-route-controller-manager/route-controller-manager-7968c6c999-b54xp to master-2 |
| | openshift-apiserver | apiserver | apiserver-6576f6bc9d-r2fhv | TerminationGracefulTerminationFinished | All pending requests processed |
| | openshift-apiserver | apiserver | apiserver-6576f6bc9d-xfzjr | TerminationStart | Received signal to terminate, becoming unready, but keeping serving |
| | openshift-apiserver | apiserver | apiserver-6576f6bc9d-xfzjr | TerminationMinimalShutdownDurationFinished | The minimal shutdown duration of 50s finished |
| | openshift-apiserver | apiserver | apiserver-6576f6bc9d-xfzjr | TerminationStoppedServing | Server has stopped listening |
| | openshift-apiserver | apiserver | apiserver-6576f6bc9d-xfzjr | TerminationPreShutdownHooksFinished | All pre-shutdown hooks have been finished |
| | openshift-controller-manager | | controller-manager-56cfb99cfd-9798f | FailedScheduling | running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "controller-manager-56cfb99cfd-9798f": pod controller-manager-56cfb99cfd-9798f is already assigned to node "master-1" |
| | openshift-oauth-apiserver | | apiserver-7b6784d654-s9576 | FailedScheduling | 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. |
| | openshift-oauth-apiserver | apiserver | apiserver-7b6784d654-l7lmp | TerminationGracefulTerminationFinished | All pending requests processed |
| | openshift-apiserver | apiserver | apiserver-5f68d4c887-pqcgn | TerminationStart | Received signal to terminate, becoming unready, but keeping serving |
| | openshift-oauth-apiserver | apiserver | apiserver-7b6784d654-l7lmp | TerminationPreShutdownHooksFinished | All pre-shutdown hooks have been finished |
| | openshift-apiserver | apiserver | apiserver-5f68d4c887-pqcgn | TerminationMinimalShutdownDurationFinished | The minimal shutdown duration of 50s finished |
| | openshift-oauth-apiserver | apiserver | apiserver-7b6784d654-l7lmp | TerminationStoppedServing | Server has stopped listening |
| | openshift-oauth-apiserver | apiserver | apiserver-7b6784d654-l7lmp | TerminationMinimalShutdownDurationFinished | The minimal shutdown duration of 50s finished |
| | openshift-oauth-apiserver | apiserver | apiserver-7b6784d654-l7lmp | TerminationStart | Received signal to terminate, becoming unready, but keeping serving |
| | openshift-oauth-apiserver | | apiserver-7b6784d654-l7lmp | FailedScheduling | 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. |
| | openshift-apiserver | apiserver | apiserver-5f68d4c887-pqcgn | TerminationStoppedServing | Server has stopped listening |
| | openshift-apiserver | | apiserver-5f68d4c887-pqcgn | Scheduled | Successfully assigned openshift-apiserver/apiserver-5f68d4c887-pqcgn to master-0 |
| | openshift-apiserver | | apiserver-5f68d4c887-pqcgn | FailedScheduling | 0/3 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 node(s) didn't match pod anti-affinity rules. |
| | openshift-apiserver | | apiserver-5f68d4c887-pqcgn | FailedScheduling | 0/3 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 node(s) didn't match pod anti-affinity rules. |
| | openshift-apiserver | | apiserver-5f68d4c887-j7ckh | FailedScheduling | 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-apiserver | | apiserver-5f68d4c887-j7ckh | FailedScheduling | 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-oauth-apiserver | | apiserver-7b6784d654-l7lmp | FailedScheduling | 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. |
| | openshift-apiserver | apiserver | apiserver-5f68d4c887-pqcgn | TerminationPreShutdownHooksFinished | All pre-shutdown hooks have been finished |
| | openshift-oauth-apiserver | | apiserver-7b6784d654-g299n | Scheduled | Successfully assigned openshift-oauth-apiserver/apiserver-7b6784d654-g299n to master-1 |
| | openshift-apiserver | apiserver | apiserver-5f68d4c887-pqcgn | TerminationGracefulTerminationFinished | All pending requests processed |
| | openshift-apiserver | | apiserver-5f68d4c887-s2fvb | FailedScheduling | 0/3 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 node(s) didn't match pod anti-affinity rules. |
| | openshift-oauth-apiserver | | apiserver-7b6784d654-g299n | FailedScheduling | 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-oauth-apiserver | | apiserver-7b6784d654-g299n | FailedScheduling | 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-oauth-apiserver | | apiserver-7b6784d654-8vpmp | Scheduled | Successfully assigned openshift-oauth-apiserver/apiserver-7b6784d654-8vpmp to master-2 |
| | openshift-apiserver | | apiserver-5f68d4c887-s2fvb | Scheduled | Successfully assigned openshift-apiserver/apiserver-5f68d4c887-s2fvb to master-2 |
| | openshift-oauth-apiserver | | apiserver-7b6784d654-s9576 | FailedScheduling | 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. |
| | openshift-operators | | obo-prometheus-operator-7c8cf85677-w5k2h | Scheduled | Successfully assigned openshift-operators/obo-prometheus-operator-7c8cf85677-w5k2h to master-0 |
| | openshift-operators | | obo-prometheus-operator-admission-webhook-7cb968574c-6c6cd | Scheduled | Successfully assigned openshift-operators/obo-prometheus-operator-admission-webhook-7cb968574c-6c6cd to master-0 |
| | openshift-apiserver | apiserver | apiserver-595d5f74d8-ttb94 | TerminationGracefulTerminationFinished | All pending requests processed |
| | openshift-apiserver | apiserver | apiserver-595d5f74d8-ttb94 | TerminationPreShutdownHooksFinished | All pre-shutdown hooks have been finished |
| | openshift-apiserver | apiserver | apiserver-595d5f74d8-ttb94 | TerminationStoppedServing | Server has stopped listening |
| | openshift-apiserver | apiserver | apiserver-595d5f74d8-ttb94 | TerminationMinimalShutdownDurationFinished | The minimal shutdown duration of 50s finished |
| | openshift-route-controller-manager | | route-controller-manager-7968c6c999-b54xp | FailedScheduling | 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-operators | | obo-prometheus-operator-admission-webhook-7cb968574c-ghw8h | Scheduled | Successfully assigned openshift-operators/obo-prometheus-operator-admission-webhook-7cb968574c-ghw8h to master-1 |
| | openshift-operators | | observability-operator-cc5f78dfc-xm62s | Scheduled | Successfully assigned openshift-operators/observability-operator-cc5f78dfc-xm62s to master-0 |
| | openshift-apiserver | apiserver | apiserver-595d5f74d8-ttb94 | TerminationStart | Received signal to terminate, becoming unready, but keeping serving |
| | openshift-operators | | perses-operator-54bc95c9fb-k5626 | Scheduled | Successfully assigned openshift-operators/perses-operator-54bc95c9fb-k5626 to master-0 |
| | openshift-oauth-apiserver | | apiserver-7b6784d654-8vpmp | FailedScheduling | 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-oauth-apiserver | | apiserver-7b6784d654-8vpmp | FailedScheduling | 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-oauth-apiserver | | apiserver-7b6784d654-27mg2 | Scheduled | Successfully assigned openshift-oauth-apiserver/apiserver-7b6784d654-27mg2 to master-0 |
| | openshift-oauth-apiserver | | apiserver-7b6784d654-27mg2 | FailedScheduling | 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-oauth-apiserver | | apiserver-7b6784d654-27mg2 | FailedScheduling | 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-nmstate | | nmstate-webhook-6cdbc54649-bj7wk | Scheduled | Successfully assigned openshift-nmstate/nmstate-webhook-6cdbc54649-bj7wk to master-0 |
| | openshift-nmstate | | nmstate-handler-sbvf7 | Scheduled | Successfully assigned openshift-nmstate/nmstate-handler-sbvf7 to master-0 |
| | openshift-nmstate | | nmstate-handler-g87gn | Scheduled | Successfully assigned openshift-nmstate/nmstate-handler-g87gn to master-2 |
| | openshift-nmstate | | nmstate-operator-858ddd8f98-7gf7t | Scheduled | Successfully assigned openshift-nmstate/nmstate-operator-858ddd8f98-7gf7t to master-0 |
| | openshift-nmstate | | nmstate-metrics-fdff9cb8d-j4j8c | Scheduled | Successfully assigned openshift-nmstate/nmstate-metrics-fdff9cb8d-j4j8c to master-0 |
| | openshift-oauth-apiserver | apiserver | apiserver-c57444595-mj7cx | TerminationStoppedServing | Server has stopped listening |
| | openshift-nmstate | | nmstate-console-plugin-6b874cbd85-h8v5p | Scheduled | Successfully assigned openshift-nmstate/nmstate-console-plugin-6b874cbd85-h8v5p to master-1 |
| | openshift-apiserver | | apiserver-595d5f74d8-ttb94 | FailedScheduling | 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. |
| | openshift-apiserver | | apiserver-595d5f74d8-ttb94 | FailedScheduling | 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. |
| | openshift-apiserver | apiserver | apiserver-595d5f74d8-hck8v | TerminationGracefulTerminationFinished | All pending requests processed |
| | openshift-apiserver | apiserver | apiserver-595d5f74d8-hck8v | TerminationPreShutdownHooksFinished | All pre-shutdown hooks have been finished |
| | openshift-apiserver | apiserver | apiserver-595d5f74d8-hck8v | TerminationStoppedServing | Server has stopped listening |
| | openshift-apiserver | apiserver | apiserver-6576f6bc9d-xfzjr | TerminationGracefulTerminationFinished | All pending requests processed |
| | openshift-apiserver | apiserver | apiserver-6576f6bc9d-r2fhv | TerminationPreShutdownHooksFinished | All pre-shutdown hooks have been finished |
| | openshift-apiserver | | apiserver-8644c46667-7z9ft | FailedScheduling | 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. |
| | openshift-apiserver | apiserver | apiserver-595d5f74d8-hck8v | TerminationMinimalShutdownDurationFinished | The minimal shutdown duration of 50s finished |
| | openshift-oauth-apiserver | apiserver | apiserver-7b6784d654-s9576 | TerminationStart | Received signal to terminate, becoming unready, but keeping serving |
| | openshift-oauth-apiserver | apiserver | apiserver-96c4c446c-brl6n | TerminationGracefulTerminationFinished | All pending requests processed |
| | openshift-oauth-apiserver | apiserver | apiserver-96c4c446c-brl6n | TerminationPreShutdownHooksFinished | All pre-shutdown hooks have been finished |
| | openshift-network-operator | | iptables-alerter-hkmb5 | Scheduled | Successfully assigned openshift-network-operator/iptables-alerter-hkmb5 to master-0 |
| | openshift-oauth-apiserver | apiserver | apiserver-96c4c446c-brl6n | TerminationStoppedServing | Server has stopped listening |
| | openshift-oauth-apiserver | apiserver | apiserver-96c4c446c-brl6n | TerminationMinimalShutdownDurationFinished | The minimal shutdown duration of 50s finished |
| | openshift-oauth-apiserver | apiserver | apiserver-96c4c446c-brl6n | TerminationStart | Received signal to terminate, becoming unready, but keeping serving |
| | openshift-oauth-apiserver | apiserver | apiserver-96c4c446c-728v2 | TerminationGracefulTerminationFinished | All pending requests processed |
| | openshift-console | | console-564c479f-s9vtn | FailedScheduling | 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-console | | console-564c479f-s9vtn | FailedScheduling | 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-apiserver | apiserver | apiserver-595d5f74d8-hck8v | TerminationStart | Received signal to terminate, becoming unready, but keeping serving |
| | openshift-oauth-apiserver | apiserver | apiserver-c57444595-mj7cx | TerminationGracefulTerminationFinished | All pending requests processed |
| | openshift-oauth-apiserver | apiserver | apiserver-96c4c446c-728v2 | TerminationPreShutdownHooksFinished | All pre-shutdown hooks have been finished |
| | openshift-oauth-apiserver | apiserver | apiserver-96c4c446c-728v2 | TerminationStoppedServing | Server has stopped listening |
| | openshift-image-registry | | node-ca-xvwmq | Scheduled | Successfully assigned openshift-image-registry/node-ca-xvwmq to master-1 |
| | openshift-oauth-apiserver | apiserver | apiserver-96c4c446c-728v2 | TerminationMinimalShutdownDurationFinished | The minimal shutdown duration of 50s finished |
| | openshift-oauth-apiserver | apiserver | apiserver-96c4c446c-728v2 | TerminationStart | Received signal to terminate, becoming unready, but keeping serving |
| | openshift-image-registry | | node-ca-8fg56 | Scheduled | Successfully assigned openshift-image-registry/node-ca-8fg56 to master-2 |
| | openshift-oauth-apiserver | apiserver | apiserver-84c8b8d745-wnpsp | TerminationGracefulTerminationFinished | All pending requests processed |
| | openshift-oauth-apiserver | apiserver | apiserver-84c8b8d745-wnpsp | TerminationPreShutdownHooksFinished | All pre-shutdown hooks have been finished |
| | openshift-apiserver | apiserver | apiserver-8644c46667-cg62m | TerminationStart | Received signal to terminate, becoming unready, but keeping serving |
| | openshift-oauth-apiserver | apiserver | apiserver-84c8b8d745-wnpsp | TerminationStoppedServing | Server has stopped listening |
| | openshift-oauth-apiserver | apiserver | apiserver-84c8b8d745-wnpsp | TerminationMinimalShutdownDurationFinished | The minimal shutdown duration of 50s finished |
| | openshift-authentication | | oauth-openshift-55df5b4c9d-k6sz4 | Scheduled | Successfully assigned openshift-authentication/oauth-openshift-55df5b4c9d-k6sz4 to master-2 |
| | openshift-authentication | | oauth-openshift-55df5b4c9d-wpbsb | FailedScheduling | 0/3 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 node(s) didn't match pod anti-affinity rules. |
| | openshift-authentication | | oauth-openshift-55df5b4c9d-wpbsb | FailedScheduling | 0/3 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 node(s) didn't match pod anti-affinity rules. |
| | openshift-oauth-apiserver | apiserver | apiserver-c57444595-zs4m8 | TerminationStart | Received signal to terminate, becoming unready, but keeping serving |
| | openshift-network-node-identity | | network-node-identity-lfw7t | Scheduled | Successfully assigned openshift-network-node-identity/network-node-identity-lfw7t to master-0 |
| | openshift-oauth-apiserver | apiserver | apiserver-c57444595-zs4m8 | TerminationMinimalShutdownDurationFinished | The minimal shutdown duration of 50s finished |
| | openshift-oauth-apiserver | apiserver | apiserver-c57444595-zs4m8 | TerminationStoppedServing | Server has stopped listening |
| | openshift-monitoring | | metrics-server-76c4979bdc-gds6w | FailedScheduling | 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. |
| | openshift-authentication | | oauth-openshift-65687bc9c8-h4cd4 | FailedScheduling | 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. |
| | metallb-system | | controller-68d546b9d8-9strj | Scheduled | Successfully assigned metallb-system/controller-68d546b9d8-9strj to master-0 |
| | openshift-oauth-apiserver | apiserver | apiserver-c57444595-zs4m8 | TerminationPreShutdownHooksFinished | All pre-shutdown hooks have been finished |
| | openshift-image-registry | | node-ca-4wx2z | Scheduled | Successfully assigned openshift-image-registry/node-ca-4wx2z to master-0 |
| | openshift-monitoring | | metrics-server-76c4979bdc-gds6w | FailedScheduling | 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. |
| | openshift-oauth-apiserver | apiserver | apiserver-c57444595-zs4m8 | TerminationGracefulTerminationFinished | All pending requests processed |
| | openshift-apiserver | apiserver | apiserver-8644c46667-cg62m | TerminationMinimalShutdownDurationFinished | The minimal shutdown duration of 50s finished |
| | openshift-console | | console-564c479f-s9vtn | Scheduled | Successfully assigned openshift-console/console-564c479f-s9vtn to master-0 |
| | openshift-marketplace | | redhat-marketplace-9llqb | Scheduled | Successfully assigned openshift-marketplace/redhat-marketplace-9llqb to master-2 |
| | openshift-marketplace | | redhat-marketplace-7dljg | Scheduled | Successfully assigned openshift-marketplace/redhat-marketplace-7dljg to master-2 |
| | openshift-marketplace | | redhat-marketplace-2p79c | FailedScheduling | running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "redhat-marketplace-2p79c": pod redhat-marketplace-2p79c is already assigned to node "master-2" |
| | openshift-authentication | | oauth-openshift-65687bc9c8-h4cd4 | FailedScheduling | 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. |
| | metallb-system | | frr-k8s-2pxml | Scheduled | Successfully assigned metallb-system/frr-k8s-2pxml to master-2 |
| | openshift-apiserver | apiserver | apiserver-8644c46667-cg62m | TerminationStoppedServing | Server has stopped listening |
| | openshift-monitoring | | metrics-server-76c4979bdc-gds6w | Scheduled | Successfully assigned openshift-monitoring/metrics-server-76c4979bdc-gds6w to master-2 |
| | openshift-monitoring | | metrics-server-76c4979bdc-mgff4 | FailedScheduling | 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. |
| | openshift-monitoring | | metrics-server-76c4979bdc-mgff4 | FailedScheduling | 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. |
| | openshift-monitoring | | metrics-server-76c4979bdc-mgff4 | FailedScheduling | 0/3 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 node(s) didn't match pod anti-affinity rules. |
| | openshift-monitoring | | metrics-server-76c4979bdc-mgff4 | FailedScheduling | 0/3 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 node(s) didn't match pod anti-affinity rules. |
| | openshift-monitoring | | metrics-server-76c4979bdc-mgff4 | Scheduled | Successfully assigned openshift-monitoring/metrics-server-76c4979bdc-mgff4 to master-0 |
| | openshift-monitoring | | monitoring-plugin-75bcf9f5fd-5f2qh | Scheduled | Successfully assigned openshift-monitoring/monitoring-plugin-75bcf9f5fd-5f2qh to master-2 |
| | openshift-apiserver | apiserver | apiserver-8644c46667-cg62m | TerminationPreShutdownHooksFinished | All pre-shutdown hooks have been finished |
| | openshift-apiserver | | apiserver-65499f9774-hhfd6 | Scheduled | Successfully assigned openshift-apiserver/apiserver-65499f9774-hhfd6 to master-1 |
| | openshift-console | | console-5958979c8-mpc88 | FailedScheduling | 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-dns | | dns-default-6qp6p | Scheduled | Successfully assigned openshift-dns/dns-default-6qp6p to master-0 |
| | openshift-dns | | node-resolver-544bd | Scheduled | Successfully assigned openshift-dns/node-resolver-544bd to master-0 |
| | openshift-apiserver | | apiserver-65499f9774-hhfd6 | FailedScheduling | 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-authentication | | oauth-openshift-65687bc9c8-h4cd4 | Scheduled | Successfully assigned openshift-authentication/oauth-openshift-65687bc9c8-h4cd4 to master-1 |
| | openshift-console | | console-5958979c8-mpc88 | FailedScheduling | 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. |
openshift-apiserver |
apiserver-65499f9774-hhfd6 |
FailedScheduling |
0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. | ||
openshift-console |
console-5958979c8-mpc88 |
Scheduled |
Successfully assigned openshift-console/console-5958979c8-mpc88 to master-0 | ||
openshift-marketplace |
fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cqvvdb |
Scheduled |
Successfully assigned openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cqvvdb to master-2 | ||
openshift-route-controller-manager |
route-controller-manager-7968c6c999-vcjcn |
Scheduled |
Successfully assigned openshift-route-controller-manager/route-controller-manager-7968c6c999-vcjcn to master-1 | ||
openshift-route-controller-manager |
route-controller-manager-7968c6c999-vcjcn |
FailedScheduling |
0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. | ||
openshift-route-controller-manager |
route-controller-manager-7968c6c999-tlxv6 |
Scheduled |
Successfully assigned openshift-route-controller-manager/route-controller-manager-7968c6c999-tlxv6 to master-0 | ||
openshift-route-controller-manager |
route-controller-manager-7968c6c999-tlxv6 |
FailedScheduling |
0/3 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 node(s) didn't match pod anti-affinity rules. | ||
openshift-route-controller-manager |
route-controller-manager-7968c6c999-tlxv6 |
FailedScheduling |
0/3 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 node(s) didn't match pod anti-affinity rules. | ||
openshift-oauth-apiserver |
apiserver |
apiserver-c57444595-mj7cx |
TerminationPreShutdownHooksFinished |
All pre-shutdown hooks have been finished | |
openshift-marketplace |
redhat-operators-hh4tw |
Scheduled |
Successfully assigned openshift-marketplace/redhat-operators-hh4tw to master-2 | ||
openshift-authentication |
oauth-openshift-65687bc9c8-twgxt |
FailedScheduling |
0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. | ||
openshift-route-controller-manager |
route-controller-manager-77674cffc8-k5fvv |
Scheduled |
Successfully assigned openshift-route-controller-manager/route-controller-manager-77674cffc8-k5fvv to master-1 | ||
openshift-route-controller-manager |
route-controller-manager-77674cffc8-gf5tz |
Scheduled |
Successfully assigned openshift-route-controller-manager/route-controller-manager-77674cffc8-gf5tz to master-2 | ||
openshift-route-controller-manager |
route-controller-manager-77674cffc8-gf5tz |
FailedScheduling |
0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. | ||
openshift-route-controller-manager |
route-controller-manager-76f4d8cd68-t98ml |
Scheduled |
Successfully assigned openshift-route-controller-manager/route-controller-manager-76f4d8cd68-t98ml to master-2 | ||
openshift-route-controller-manager |
route-controller-manager-76f4d8cd68-bzmnd |
FailedScheduling |
0/3 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 node(s) didn't match pod anti-affinity rules. | ||
openshift-route-controller-manager |
route-controller-manager-76f4d8cd68-bzmnd |
FailedScheduling |
0/3 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 node(s) didn't match pod anti-affinity rules. | ||
openshift-console |
console-5958979c8-p9l2s |
Scheduled |
Successfully assigned openshift-console/console-5958979c8-p9l2s to master-1 | ||
openshift-marketplace |
community-operators-thpzb |
Scheduled |
Successfully assigned openshift-marketplace/community-operators-thpzb to master-2 | ||
openshift-marketplace |
community-operators-jdhvr |
Scheduled |
Successfully assigned openshift-marketplace/community-operators-jdhvr to master-2 | ||
openshift-marketplace |
community-operators-hnrf8 |
Scheduled |
Successfully assigned openshift-marketplace/community-operators-hnrf8 to master-2 | ||
openshift-marketplace |
community-operators-g6dqm |
Scheduled |
Successfully assigned openshift-marketplace/community-operators-g6dqm to master-2 | ||
openshift-console |
console-668956f9dd-llkhv |
Scheduled |
Successfully assigned openshift-console/console-668956f9dd-llkhv to master-2 | ||
openshift-marketplace |
community-operators-7flhc |
FailedScheduling |
running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "community-operators-7flhc": pod community-operators-7flhc is already assigned to node "master-2" | ||
openshift-marketplace |
community-operators-5zr4r |
Scheduled |
Successfully assigned openshift-marketplace/community-operators-5zr4r to master-2 | ||
openshift-marketplace |
certified-operators-w9zcz |
Scheduled |
Successfully assigned openshift-marketplace/certified-operators-w9zcz to master-2 | ||
openshift-ovn-kubernetes |
ovnkube-node-xsrn9 |
Scheduled |
Successfully assigned openshift-ovn-kubernetes/ovnkube-node-xsrn9 to master-0 | ||
openshift-marketplace |
certified-operators-r4wbf |
Scheduled |
Successfully assigned openshift-marketplace/certified-operators-r4wbf to master-2 | ||
openshift-marketplace |
certified-operators-qljc7 |
Scheduled |
Successfully assigned openshift-marketplace/certified-operators-qljc7 to master-2 | ||
openshift-console |
console-668956f9dd-mlrd8 |
Scheduled |
Successfully assigned openshift-console/console-668956f9dd-mlrd8 to master-1 | ||
openshift-marketplace |
certified-operators-75z82 |
Scheduled |
Successfully assigned openshift-marketplace/certified-operators-75z82 to master-2 | ||
openshift-marketplace |
certified-operators-629l7 |
FailedScheduling |
running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "certified-operators-629l7": pod certified-operators-629l7 is already assigned to node "master-2" | ||
openshift-marketplace |
a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d4wlwn |
Scheduled |
Successfully assigned openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d4wlwn to master-2 | ||
openshift-marketplace |
8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2ppm5m |
Scheduled |
Successfully assigned openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2ppm5m to master-2 | ||
openshift-marketplace |
695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb699lfcx |
Scheduled |
Successfully assigned openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb699lfcx to master-2 | ||
metallb-system |
frr-k8s-qqrwm |
Scheduled |
Successfully assigned metallb-system/frr-k8s-qqrwm to master-0 | ||
openshift-marketplace |
4771432b41461e44875d05f444712a00e992d5a0d93af947c146bd94b725qlb |
Scheduled |
Successfully assigned openshift-marketplace/4771432b41461e44875d05f444712a00e992d5a0d93af947c146bd94b725qlb to master-2 | ||
openshift-console |
console-77d8f866f9-8jlq8 |
FailedScheduling |
0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. | ||
openshift-console |
console-77d8f866f9-8jlq8 |
FailedScheduling |
0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. | ||
openshift-console |
console-77d8f866f9-8jlq8 |
Scheduled |
Successfully assigned openshift-console/console-77d8f866f9-8jlq8 to master-0 | ||
openshift-authentication |
oauth-openshift-65687bc9c8-twgxt |
FailedScheduling |
0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. | ||
openshift-apiserver |
apiserver |
apiserver-8644c46667-cg62m |
TerminationGracefulTerminationFinished |
All pending requests processed | |
openshift-console |
console-77d8f866f9-skvf6 |
Scheduled |
Successfully assigned openshift-console/console-77d8f866f9-skvf6 to master-1 | ||
openshift-monitoring |
monitoring-plugin-75bcf9f5fd-xkw2l |
Scheduled |
Successfully assigned openshift-monitoring/monitoring-plugin-75bcf9f5fd-xkw2l to master-1 | ||
openshift-monitoring |
node-exporter-gww4z |
Scheduled |
Successfully assigned openshift-monitoring/node-exporter-gww4z to master-0 | ||
openshift-monitoring |
prometheus-k8s-0 |
Scheduled |
Successfully assigned openshift-monitoring/prometheus-k8s-0 to master-2 | ||
openshift-monitoring |
prometheus-k8s-0 |
Scheduled |
Successfully assigned openshift-monitoring/prometheus-k8s-0 to master-2 | ||
openshift-monitoring |
prometheus-k8s-1 |
Scheduled |
Successfully assigned openshift-monitoring/prometheus-k8s-1 to master-1 | ||
openshift-cluster-node-tuning-operator |
tuned-vvmcs |
Scheduled |
Successfully assigned openshift-cluster-node-tuning-operator/tuned-vvmcs to master-0 | ||
openshift-machine-config-operator |
machine-config-server-xcgtf |
Scheduled |
Successfully assigned openshift-machine-config-operator/machine-config-server-xcgtf to master-0 | ||
openshift-oauth-apiserver |
apiserver |
apiserver-84c8b8d745-wnpsp |
TerminationStart |
Received signal to terminate, becoming unready, but keeping serving | |
openshift-oauth-apiserver |
apiserver-84c8b8d745-wnpsp |
Scheduled |
Successfully assigned openshift-oauth-apiserver/apiserver-84c8b8d745-wnpsp to master-0 | ||
openshift-oauth-apiserver |
apiserver-84c8b8d745-wnpsp |
FailedScheduling |
0/3 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 node(s) didn't match pod anti-affinity rules. | ||
openshift-authentication |
oauth-openshift-65687bc9c8-twgxt |
Scheduled |
Successfully assigned openshift-authentication/oauth-openshift-65687bc9c8-twgxt to master-2 | ||
openshift-oauth-apiserver |
apiserver-84c8b8d745-wnpsp |
FailedScheduling |
0/3 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 node(s) didn't match pod anti-affinity rules. | ||
openshift-controller-manager |
controller-manager-56cfb99cfd-rq5ck |
Scheduled |
Successfully assigned openshift-controller-manager/controller-manager-56cfb99cfd-rq5ck to master-2 | ||
openshift-oauth-apiserver |
apiserver |
apiserver-84c8b8d745-p4css |
TerminationGracefulTerminationFinished |
All pending requests processed | |
openshift-oauth-apiserver |
apiserver |
apiserver-84c8b8d745-p4css |
TerminationPreShutdownHooksFinished |
All pre-shutdown hooks have been finished | |
openshift-oauth-apiserver |
apiserver |
apiserver-84c8b8d745-p4css |
TerminationStoppedServing |
Server has stopped listening | |
openshift-apiserver |
apiserver-65499f9774-d4zpq |
Scheduled |
Successfully assigned openshift-apiserver/apiserver-65499f9774-d4zpq to master-0 | ||
openshift-apiserver |
apiserver-65499f9774-d4zpq |
FailedScheduling |
0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. | ||
metallb-system |
frr-k8s-webhook-server-64bf5d555-sgx9c |
Scheduled |
Successfully assigned metallb-system/frr-k8s-webhook-server-64bf5d555-sgx9c to master-0 | ||
openshift-apiserver |
apiserver-65499f9774-d4zpq |
FailedScheduling |
0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. | ||
openshift-oauth-apiserver |
apiserver |
apiserver-84c8b8d745-p4css |
TerminationMinimalShutdownDurationFinished |
The minimal shutdown duration of 50s finished | |
openshift-network-diagnostics |
network-check-target-vmk66 |
Scheduled |
Successfully assigned openshift-network-diagnostics/network-check-target-vmk66 to master-0 | ||
openshift-authentication |
oauth-openshift-65687bc9c8-w9j4s |
FailedScheduling |
0/3 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 node(s) didn't match pod anti-affinity rules. | ||
openshift-oauth-apiserver |
apiserver |
apiserver-84c8b8d745-p4css |
TerminationStart |
Received signal to terminate, becoming unready, but keeping serving | |
openshift-operator-lifecycle-manager |
collect-profiles-29340795-t5kx5 |
FailedScheduling |
running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "collect-profiles-29340795-t5kx5": pod collect-profiles-29340795-t5kx5 is already assigned to node "master-2" | ||
openshift-monitoring |
prometheus-k8s-1 |
Scheduled |
Successfully assigned openshift-monitoring/prometheus-k8s-1 to master-0 | ||
openshift-operator-lifecycle-manager |
collect-profiles-29340810-2nzff |
Scheduled |
Successfully assigned openshift-operator-lifecycle-manager/collect-profiles-29340810-2nzff to master-1 | ||
openshift-operator-lifecycle-manager |
collect-profiles-29340825-szpzv |
Scheduled |
Successfully assigned openshift-operator-lifecycle-manager/collect-profiles-29340825-szpzv to master-1 | ||
openshift-marketplace |
redhat-marketplace-cfl42 |
Scheduled |
Successfully assigned openshift-marketplace/redhat-marketplace-cfl42 to master-2 | ||
openshift-operator-lifecycle-manager |
collect-profiles-29340840-w6v9t |
Scheduled |
Successfully assigned openshift-operator-lifecycle-manager/collect-profiles-29340840-w6v9t to master-1 | ||
metallb-system |
metallb-operator-controller-manager-6479dd8558-s545w |
Scheduled |
Successfully assigned metallb-system/metallb-operator-controller-manager-6479dd8558-s545w to master-0 | ||
openshift-apiserver |
apiserver-65499f9774-b84hw |
Scheduled |
Successfully assigned openshift-apiserver/apiserver-65499f9774-b84hw to master-2 | ||
openshift-console |
downloads-65bb9777fc-bm4pw |
Scheduled |
Successfully assigned openshift-console/downloads-65bb9777fc-bm4pw to master-1 | ||
openshift-apiserver |
apiserver-65499f9774-b84hw |
FailedScheduling |
0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. | ||
openshift-console |
downloads-65bb9777fc-sd822 |
Scheduled |
Successfully assigned openshift-console/downloads-65bb9777fc-sd822 to master-2 | ||
openshift-monitoring |
thanos-querier-cc99494f6-ds5gd |
Scheduled |
Successfully assigned openshift-monitoring/thanos-querier-cc99494f6-ds5gd to master-1 | ||
openshift-monitoring |
thanos-querier-cc99494f6-kmmxc |
Scheduled |
Successfully assigned openshift-monitoring/thanos-querier-cc99494f6-kmmxc to master-2 | ||
openshift-multus |
multus-additional-cni-plugins-dzmh2 |
Scheduled |
Successfully assigned openshift-multus/multus-additional-cni-plugins-dzmh2 to master-0 | ||
metallb-system |
metallb-operator-webhook-server-6d98fdfb58-5gp8d |
Scheduled |
Successfully assigned metallb-system/metallb-operator-webhook-server-6d98fdfb58-5gp8d to master-0 | ||
openshift-multus |
multus-admission-controller-6bc7c56dc6-4dpkm |
Scheduled |
Successfully assigned openshift-multus/multus-admission-controller-6bc7c56dc6-4dpkm to master-1 | ||
openshift-multus |
multus-admission-controller-6bc7c56dc6-n46rr |
Scheduled |
Successfully assigned openshift-multus/multus-admission-controller-6bc7c56dc6-n46rr to master-2 | ||
openshift-multus |
multus-bvr92 |
Scheduled |
Successfully assigned openshift-multus/multus-bvr92 to master-0 | ||
openshift-apiserver |
apiserver-595d5f74d8-hck8v |
FailedScheduling |
running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "apiserver-595d5f74d8-hck8v": pod apiserver-595d5f74d8-hck8v is already assigned to node "master-1" | ||
openshift-apiserver |
apiserver-65499f9774-b84hw |
FailedScheduling |
0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. | ||
openshift-oauth-apiserver |
apiserver-84c8b8d745-p4css |
Scheduled |
Successfully assigned openshift-oauth-apiserver/apiserver-84c8b8d745-p4css to master-2 | ||
openshift-console-operator |
console-operator-6768b5f5f9-6l8p6 |
Scheduled |
Successfully assigned openshift-console-operator/console-operator-6768b5f5f9-6l8p6 to master-2 | ||
openshift-oauth-apiserver |
apiserver |
apiserver-84c8b8d745-j8fqz |
TerminationGracefulTerminationFinished |
All pending requests processed | |
openshift-oauth-apiserver |
apiserver |
apiserver-84c8b8d745-j8fqz |
TerminationPreShutdownHooksFinished |
All pre-shutdown hooks have been finished | |
openshift-oauth-apiserver |
apiserver |
apiserver-84c8b8d745-j8fqz |
TerminationStoppedServing |
Server has stopped listening | |
openshift-oauth-apiserver |
apiserver |
apiserver-84c8b8d745-j8fqz |
TerminationMinimalShutdownDurationFinished |
The minimal shutdown duration of 50s finished | |
openshift-oauth-apiserver |
apiserver |
apiserver-84c8b8d745-j8fqz |
TerminationStart |
Received signal to terminate, becoming unready, but keeping serving | |
openshift-apiserver |
apiserver |
apiserver-5f68d4c887-s2fvb |
TerminationGracefulTerminationFinished |
All pending requests processed | |
openshift-apiserver |
apiserver |
apiserver-5f68d4c887-s2fvb |
TerminationPreShutdownHooksFinished |
All pre-shutdown hooks have been finished | |
openshift-controller-manager |
controller-manager-86659fd8d-zhj4d |
Scheduled |
Successfully assigned openshift-controller-manager/controller-manager-86659fd8d-zhj4d to master-1 | ||
metallb-system |
speaker-7mkjj |
Scheduled |
Successfully assigned metallb-system/speaker-7mkjj to master-1 | ||
openshift-apiserver |
apiserver |
apiserver-5f68d4c887-s2fvb |
TerminationStoppedServing |
Server has stopped listening | |
openshift-apiserver |
apiserver |
apiserver-5f68d4c887-s2fvb |
TerminationMinimalShutdownDurationFinished |
The minimal shutdown duration of 50s finished | |
openshift-oauth-apiserver |
apiserver-84c8b8d745-j8fqz |
Scheduled |
Successfully assigned openshift-oauth-apiserver/apiserver-84c8b8d745-j8fqz to master-1 | ||
openshift-controller-manager |
controller-manager-78c5d9fccd-pr9sv |
Scheduled |
Successfully assigned openshift-controller-manager/controller-manager-78c5d9fccd-pr9sv to master-2 | ||
openshift-apiserver |
apiserver |
apiserver-5f68d4c887-s2fvb |
TerminationStart |
Received signal to terminate, becoming unready, but keeping serving | |
openshift-controller-manager |
controller-manager-78c5d9fccd-pr9sv |
FailedScheduling |
0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. | ||
openshift-oauth-apiserver |
apiserver-84c8b8d745-j8fqz |
FailedScheduling |
0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. | ||
openshift-authentication |
oauth-openshift-65687bc9c8-w9j4s |
FailedScheduling |
0/3 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 node(s) didn't match pod anti-affinity rules. | ||
openshift-controller-manager |
controller-manager-78c5d9fccd-5xwlt |
FailedScheduling |
0/3 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 node(s) didn't match pod anti-affinity rules. | ||
openshift-apiserver |
apiserver |
apiserver-6576f6bc9d-r2fhv |
TerminationStoppedServing |
Server has stopped listening | |
openshift-controller-manager |
controller-manager-78c5d9fccd-5xwlt |
FailedScheduling |
0/3 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 node(s) didn't match pod anti-affinity rules. | ||
openshift-oauth-apiserver |
apiserver-84c8b8d745-j8fqz |
FailedScheduling |
0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. | ||
openshift-controller-manager |
controller-manager-78c5d9fccd-2lzk5 |
Scheduled |
Successfully assigned openshift-controller-manager/controller-manager-78c5d9fccd-2lzk5 to master-1 | ||
openshift-machine-config-operator |
machine-config-daemon-7q9jd |
Scheduled |
Successfully assigned openshift-machine-config-operator/machine-config-daemon-7q9jd to master-0 | ||
openshift-controller-manager |
controller-manager-78c5d9fccd-2lzk5 |
FailedScheduling |
0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. | ||
openshift-oauth-apiserver |
apiserver |
apiserver-7b6784d654-s9576 |
TerminationGracefulTerminationFinished |
All pending requests processed | |
openshift-oauth-apiserver |
apiserver |
apiserver-7b6784d654-s9576 |
TerminationPreShutdownHooksFinished |
All pre-shutdown hooks have been finished | |
openshift-oauth-apiserver |
apiserver |
apiserver-7b6784d654-s9576 |
TerminationStoppedServing |
Server has stopped listening | |
openshift-controller-manager |
controller-manager-66975b7c4d-kl7k6 |
FailedScheduling |
0/3 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 node(s) didn't match pod anti-affinity rules. | ||
metallb-system |
speaker-kp26f |
Scheduled |
Successfully assigned metallb-system/speaker-kp26f to master-2 | ||
openshift-controller-manager |
controller-manager-66975b7c4d-kl7k6 |
FailedScheduling |
0/3 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 node(s) didn't match pod anti-affinity rules. | ||
openshift-controller-manager |
controller-manager-66975b7c4d-j962d |
Scheduled |
Successfully assigned openshift-controller-manager/controller-manager-66975b7c4d-j962d to master-2 | ||
openshift-authentication |
oauth-openshift-65687bc9c8-w9j4s |
Scheduled |
Successfully assigned openshift-authentication/oauth-openshift-65687bc9c8-w9j4s to master-0 | ||
openshift-oauth-apiserver |
apiserver |
apiserver-7b6784d654-s9576 |
TerminationMinimalShutdownDurationFinished |
The minimal shutdown duration of 50s finished | |
openshift-controller-manager |
controller-manager-56cfb99cfd-9798f |
FailedScheduling |
0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. | ||
openshift-multus |
network-metrics-daemon-p5vjv |
Scheduled |
Successfully assigned openshift-multus/network-metrics-daemon-p5vjv to master-0 | ||
openshift-network-console |
networking-console-plugin-85df6bdd68-2dd2d |
Scheduled |
Successfully assigned openshift-network-console/networking-console-plugin-85df6bdd68-2dd2d to master-2 | ||
openshift-network-console |
networking-console-plugin-85df6bdd68-f5bnc |
Scheduled |
Successfully assigned openshift-network-console/networking-console-plugin-85df6bdd68-f5bnc to master-1 | ||
openshift-authentication |
oauth-openshift-6ddc4f49f9-thnnf |
Scheduled |
Successfully assigned openshift-authentication/oauth-openshift-6ddc4f49f9-thnnf to master-1 | ||
openshift-authentication |
oauth-openshift-6ddc4f49f9-qzlvm |
FailedScheduling |
skip schedule deleting pod: openshift-authentication/oauth-openshift-6ddc4f49f9-qzlvm | ||
openshift-authentication |
oauth-openshift-6ddc4f49f9-qzlvm |
FailedScheduling |
0/3 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 node(s) didn't match pod anti-affinity rules. | ||
| | kube-system | default-scheduler | kube-scheduler | LeaderElection | master-0_711eb90d-29d9-41ad-8df1-ed4a03f4d5be became leader |
| | kube-system | cluster-policy-controller | bootstrap-kube-controller-manager-master-0 | ControlPlaneTopology | unable to get control plane topology, using HA cluster values for leader election: the server could not find the requested resource (get infrastructures.config.openshift.io cluster) |
| | kube-system | cluster-policy-controller | cluster-policy-controller-lock | LeaderElection | master-0_a4443cc3-d019-4c78-8da4-2f2e643d7146 became leader |
| | kube-system | kube-controller-manager | kube-controller-manager | LeaderElection | master-0_40944f82-eda6-4a71-a6db-4bf5ddf9c92a became leader |
| | default | apiserver | openshift-kube-apiserver | KubeAPIReadyz | readyz=true |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for kube-node-lease namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for default namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cluster-version namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-infra namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-etcd namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for kube-system namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-kube-apiserver namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-kube-controller-manager-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-kube-controller-manager namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for kube-public namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-kube-apiserver-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-kube-scheduler namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for assisted-installer namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cloud-credential-operator namespace |
| | openshift-cluster-version | openshift-cluster-version | version | LeaderElection | master-0_fadae3bd-a5b6-425e-9325-dfcb2bec1df3 became leader |
| | assisted-installer | job-controller | assisted-installer-controller | SuccessfulCreate | Created pod: assisted-installer-controller-mzrkb |
| | kube-system | kube-controller-manager | kube-controller-manager | LeaderElection | master-0_5b786bf6-2a1a-49ae-aba9-838ad5c6fb6f became leader |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-ingress-operator namespace |
| | openshift-cluster-version | openshift-cluster-version | version | LeaderElection | master-0_da596f21-ee28-40d1-855a-f896c0eaa3f1 became leader |
| | openshift-cluster-version | deployment-controller | cluster-version-operator | ScalingReplicaSet | Scaled up replica set cluster-version-operator-55ccd5d5cf to 1 |
| | openshift-cluster-version | openshift-cluster-version | version | LeaderElection | master-0_ed2630f1-04f0-4aa6-921b-5cc87053fa09 became leader |
| | openshift-cluster-version | openshift-cluster-version | version | RetrievePayload | Retrieving and verifying payload version="4.18.25" image="quay.io/openshift-release-dev/ocp-release@sha256:ba6f0f2eca65cd386a5109ddbbdb3bab9bb9801e32de56ef34f80e634a7787be" |
| | openshift-cluster-version | openshift-cluster-version | version | LoadPayload | Loading payload version="4.18.25" image="quay.io/openshift-release-dev/ocp-release@sha256:ba6f0f2eca65cd386a5109ddbbdb3bab9bb9801e32de56ef34f80e634a7787be" |
| | openshift-cluster-version | openshift-cluster-version | version | PayloadLoaded | Payload loaded version="4.18.25" image="quay.io/openshift-release-dev/ocp-release@sha256:ba6f0f2eca65cd386a5109ddbbdb3bab9bb9801e32de56ef34f80e634a7787be" architecture="amd64" |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cluster-storage-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cloud-network-config-controller namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-config-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-etcd-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cloud-controller-manager-operator namespace |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-kube-scheduler-operator namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-insights namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-cluster-node-tuning-operator namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-network-operator namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-cluster-machine-approver namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-cluster-csi-drivers namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-authentication-operator namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-controller-manager-operator namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-apiserver-operator namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-marketplace namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-cloud-controller-manager namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-image-registry namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-machine-config-operator namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-service-ca-operator namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-cluster-samples-operator namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-dns-operator namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-openstack-infra namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-kube-storage-version-migrator-operator namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-kni-infra namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-operator-lifecycle-manager namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-cluster-olm-operator namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-ovirt-infra namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-operators namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-vsphere-infra namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-nutanix-infra namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-cloud-platform-infra namespace | |
openshift-service-ca-operator |
deployment-controller |
service-ca-operator |
ScalingReplicaSet |
Scaled up replica set service-ca-operator-568c655666 to 1 | |
openshift-network-operator |
deployment-controller |
network-operator |
ScalingReplicaSet |
Scaled up replica set network-operator-854f54f8c9 to 1 | |
openshift-cluster-olm-operator |
deployment-controller |
cluster-olm-operator |
ScalingReplicaSet |
Scaled up replica set cluster-olm-operator-77b56b6f4f to 1 | |
openshift-kube-scheduler-operator |
deployment-controller |
openshift-kube-scheduler-operator |
ScalingReplicaSet |
Scaled up replica set openshift-kube-scheduler-operator-766d6b44f6 to 1 | |
openshift-kube-controller-manager-operator |
deployment-controller |
kube-controller-manager-operator |
ScalingReplicaSet |
Scaled up replica set kube-controller-manager-operator-5d85974df9 to 1 | |
openshift-apiserver-operator |
deployment-controller |
openshift-apiserver-operator |
ScalingReplicaSet |
Scaled up replica set openshift-apiserver-operator-7d88655794 to 1 | |
openshift-etcd-operator |
deployment-controller |
etcd-operator |
ScalingReplicaSet |
Scaled up replica set etcd-operator-6bddf7d79 to 1 | |
openshift-dns-operator |
deployment-controller |
dns-operator |
ScalingReplicaSet |
Scaled up replica set dns-operator-7769d9677 to 1 | |
openshift-controller-manager-operator |
deployment-controller |
openshift-controller-manager-operator |
ScalingReplicaSet |
Scaled up replica set openshift-controller-manager-operator-5745565d84 to 1 | |
(x2) | openshift-operator-lifecycle-manager |
controllermanager |
packageserver-pdb |
NoPods |
No matching pods found |
openshift-kube-storage-version-migrator-operator |
deployment-controller |
kube-storage-version-migrator-operator |
ScalingReplicaSet |
Scaled up replica set kube-storage-version-migrator-operator-dcfdffd74 to 1 | |
openshift-authentication-operator |
deployment-controller |
authentication-operator |
ScalingReplicaSet |
Scaled up replica set authentication-operator-66df44bc95 to 1 | |
openshift-marketplace |
deployment-controller |
marketplace-operator |
ScalingReplicaSet |
Scaled up replica set marketplace-operator-c4f798dd4 to 1 | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-user-workload-monitoring namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-monitoring namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-machine-api namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-config-managed namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-config namespace | |
openshift-cluster-storage-operator |
deployment-controller |
csi-snapshot-controller-operator |
ScalingReplicaSet |
Scaled up replica set csi-snapshot-controller-operator-7ff96dd767 to 1 | |
(x14) | openshift-cluster-version |
replicaset-controller |
cluster-version-operator-55ccd5d5cf |
FailedCreate |
Error creating: pods "cluster-version-operator-55ccd5d5cf-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
openshift-cluster-node-tuning-operator |
deployment-controller |
cluster-node-tuning-operator |
ScalingReplicaSet |
Scaled up replica set cluster-node-tuning-operator-7866c9bdf4 to 1 | |
openshift-monitoring |
deployment-controller |
cluster-monitoring-operator |
ScalingReplicaSet |
Scaled up replica set cluster-monitoring-operator-5b5dd85dcc to 1 | |
openshift-kube-apiserver-operator |
deployment-controller |
kube-apiserver-operator |
ScalingReplicaSet |
Scaled up replica set kube-apiserver-operator-68f5d95b74 to 1 | |
openshift-operator-lifecycle-manager |
deployment-controller |
package-server-manager |
ScalingReplicaSet |
Scaled up replica set package-server-manager-798cc87f55 to 1 | |
openshift-ingress-operator |
deployment-controller |
ingress-operator |
ScalingReplicaSet |
Scaled up replica set ingress-operator-766ddf4575 to 1 | |
openshift-config-operator |
deployment-controller |
openshift-config-operator |
ScalingReplicaSet |
Scaled up replica set openshift-config-operator-55957b47d5 to 1 | |
openshift-image-registry |
deployment-controller |
cluster-image-registry-operator |
ScalingReplicaSet |
Scaled up replica set cluster-image-registry-operator-6b8674d7ff to 1 | |
openshift-operator-lifecycle-manager |
deployment-controller |
olm-operator |
ScalingReplicaSet |
Scaled up replica set olm-operator-867f8475d9 to 1 | |
openshift-machine-api |
deployment-controller |
cluster-baremetal-operator |
ScalingReplicaSet |
Scaled up replica set cluster-baremetal-operator-6c8fbf4498 to 1 | |
openshift-operator-lifecycle-manager |
deployment-controller |
catalog-operator |
ScalingReplicaSet |
Scaled up replica set catalog-operator-f966fb6f8 to 1 | |
openshift-machine-config-operator |
deployment-controller |
machine-config-operator |
ScalingReplicaSet |
Scaled up replica set machine-config-operator-7b75469658 to 1 | |
openshift-cluster-storage-operator |
deployment-controller |
cluster-storage-operator |
ScalingReplicaSet |
Scaled up replica set cluster-storage-operator-56d4b95494 to 1 | |
openshift-insights |
deployment-controller |
insights-operator |
ScalingReplicaSet |
Scaled up replica set insights-operator-7dcf5bd85b to 1 | |
openshift-machine-api |
deployment-controller |
machine-api-operator |
ScalingReplicaSet |
Scaled up replica set machine-api-operator-9dbb96f7 to 1 | |
(x13) | openshift-cluster-storage-operator |
replicaset-controller |
csi-snapshot-controller-operator-7ff96dd767 |
FailedCreate |
Error creating: pods "csi-snapshot-controller-operator-7ff96dd767-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
(x13) | openshift-cluster-node-tuning-operator |
replicaset-controller |
cluster-node-tuning-operator-7866c9bdf4 |
FailedCreate |
Error creating: pods "cluster-node-tuning-operator-7866c9bdf4-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
(x13) | openshift-monitoring |
replicaset-controller |
cluster-monitoring-operator-5b5dd85dcc |
FailedCreate |
Error creating: pods "cluster-monitoring-operator-5b5dd85dcc-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
(x13) | openshift-kube-apiserver-operator |
replicaset-controller |
kube-apiserver-operator-68f5d95b74 |
FailedCreate |
Error creating: pods "kube-apiserver-operator-68f5d95b74-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
(x13) | openshift-operator-lifecycle-manager |
replicaset-controller |
package-server-manager-798cc87f55 |
FailedCreate |
Error creating: pods "package-server-manager-798cc87f55-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
(x13) | openshift-image-registry |
replicaset-controller |
cluster-image-registry-operator-6b8674d7ff |
FailedCreate |
Error creating: pods "cluster-image-registry-operator-6b8674d7ff-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
(x13) | openshift-ingress-operator |
replicaset-controller |
ingress-operator-766ddf4575 |
FailedCreate |
Error creating: pods "ingress-operator-766ddf4575-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
openshift-cluster-version |
deployment-controller |
cluster-version-operator |
ScalingReplicaSet |
Scaled down replica set cluster-version-operator-55ccd5d5cf to 0 from 1 | |
openshift-cluster-version |
deployment-controller |
cluster-version-operator |
ScalingReplicaSet |
Scaled up replica set cluster-version-operator-55bd67947c to 1 | |
(x13) | openshift-operator-lifecycle-manager |
replicaset-controller |
olm-operator-867f8475d9 |
FailedCreate |
Error creating: pods "olm-operator-867f8475d9-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
(x13) | openshift-config-operator |
replicaset-controller |
openshift-config-operator-55957b47d5 |
FailedCreate |
Error creating: pods "openshift-config-operator-55957b47d5-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
(x13) | openshift-operator-lifecycle-manager |
replicaset-controller |
catalog-operator-f966fb6f8 |
FailedCreate |
Error creating: pods "catalog-operator-f966fb6f8-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
(x13) | openshift-machine-config-operator |
replicaset-controller |
machine-config-operator-7b75469658 |
FailedCreate |
Error creating: pods "machine-config-operator-7b75469658-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
(x13) | openshift-machine-api |
replicaset-controller |
cluster-baremetal-operator-6c8fbf4498 |
FailedCreate |
Error creating: pods "cluster-baremetal-operator-6c8fbf4498-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
openshift-machine-api |
deployment-controller |
control-plane-machine-set-operator |
ScalingReplicaSet |
Scaled up replica set control-plane-machine-set-operator-84f9cbd5d9 to 1 | |
(x13) | openshift-machine-api |
replicaset-controller |
machine-api-operator-9dbb96f7 |
FailedCreate |
Error creating: pods "machine-api-operator-9dbb96f7-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
(x13) | openshift-cluster-storage-operator |
replicaset-controller |
cluster-storage-operator-56d4b95494 |
FailedCreate |
Error creating: pods "cluster-storage-operator-56d4b95494-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
openshift-cluster-machine-approver |
deployment-controller |
machine-approver |
ScalingReplicaSet |
Scaled up replica set machine-approver-597b8f6cd6 to 1 | |
(x14) | openshift-service-ca-operator |
replicaset-controller |
service-ca-operator-568c655666 |
FailedCreate |
Error creating: pods "service-ca-operator-568c655666-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
(x13) | openshift-insights |
replicaset-controller |
insights-operator-7dcf5bd85b |
FailedCreate |
Error creating: pods "insights-operator-7dcf5bd85b-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
(x14) | openshift-network-operator |
replicaset-controller |
network-operator-854f54f8c9 |
FailedCreate |
Error creating: pods "network-operator-854f54f8c9-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
(x14) | openshift-kube-scheduler-operator |
replicaset-controller |
openshift-kube-scheduler-operator-766d6b44f6 |
FailedCreate |
Error creating: pods "openshift-kube-scheduler-operator-766d6b44f6-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
(x14) | openshift-kube-controller-manager-operator |
replicaset-controller |
kube-controller-manager-operator-5d85974df9 |
FailedCreate |
Error creating: pods "kube-controller-manager-operator-5d85974df9-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
(x14) | openshift-apiserver-operator |
replicaset-controller |
openshift-apiserver-operator-7d88655794 |
FailedCreate |
Error creating: pods "openshift-apiserver-operator-7d88655794-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
(x14) | openshift-cluster-olm-operator |
replicaset-controller |
cluster-olm-operator-77b56b6f4f |
FailedCreate |
Error creating: pods "cluster-olm-operator-77b56b6f4f-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
(x14) | openshift-controller-manager-operator |
replicaset-controller |
openshift-controller-manager-operator-5745565d84 |
FailedCreate |
Error creating: pods "openshift-controller-manager-operator-5745565d84-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
(x14) | openshift-dns-operator |
replicaset-controller |
dns-operator-7769d9677 |
FailedCreate |
Error creating: pods "dns-operator-7769d9677-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
(x14) | openshift-etcd-operator |
replicaset-controller |
etcd-operator-6bddf7d79 |
FailedCreate |
Error creating: pods "etcd-operator-6bddf7d79-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
openshift-cloud-credential-operator |
deployment-controller |
cloud-credential-operator |
ScalingReplicaSet |
Scaled up replica set cloud-credential-operator-5cf49b6487 to 1 | |
(x14) | openshift-authentication-operator |
replicaset-controller |
authentication-operator-66df44bc95 |
FailedCreate |
Error creating: pods "authentication-operator-66df44bc95-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
(x14) | openshift-kube-storage-version-migrator-operator |
replicaset-controller |
kube-storage-version-migrator-operator-dcfdffd74 |
FailedCreate |
Error creating: pods "kube-storage-version-migrator-operator-dcfdffd74-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
(x14) | openshift-marketplace |
replicaset-controller |
marketplace-operator-c4f798dd4 |
FailedCreate |
Error creating: pods "marketplace-operator-c4f798dd4-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
openshift-machine-api |
deployment-controller |
cluster-autoscaler-operator |
ScalingReplicaSet |
Scaled up replica set cluster-autoscaler-operator-7ff449c7c5 to 1 | |
openshift-cloud-controller-manager-operator |
deployment-controller |
cluster-cloud-controller-manager-operator |
ScalingReplicaSet |
Scaled up replica set cluster-cloud-controller-manager-operator-6d4bdff5b8 to 1 | |
(x12) | openshift-cluster-version |
replicaset-controller |
cluster-version-operator-55bd67947c |
FailedCreate |
Error creating: pods "cluster-version-operator-55bd67947c-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
(x11) | openshift-cloud-credential-operator |
replicaset-controller |
cloud-credential-operator-5cf49b6487 |
FailedCreate |
Error creating: pods "cloud-credential-operator-5cf49b6487-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
(x12) | openshift-machine-api |
replicaset-controller |
control-plane-machine-set-operator-84f9cbd5d9 |
FailedCreate |
Error creating: pods "control-plane-machine-set-operator-84f9cbd5d9-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
(x11) | openshift-machine-api |
replicaset-controller |
cluster-autoscaler-operator-7ff449c7c5 |
FailedCreate |
Error creating: pods "cluster-autoscaler-operator-7ff449c7c5-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
(x12) | openshift-cluster-machine-approver |
replicaset-controller |
machine-approver-597b8f6cd6 |
FailedCreate |
Error creating: pods "machine-approver-597b8f6cd6-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
(x13) | assisted-installer |
default-scheduler |
assisted-installer-controller-mzrkb |
FailedScheduling |
no nodes available to schedule pods |
(x11) | openshift-cloud-controller-manager-operator |
replicaset-controller |
cluster-cloud-controller-manager-operator-6d4bdff5b8 |
FailedCreate |
Error creating: pods "cluster-cloud-controller-manager-operator-6d4bdff5b8-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
openshift-cloud-credential-operator |
default-scheduler |
cloud-credential-operator-5cf49b6487-4cf2d |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | |
openshift-cloud-credential-operator |
replicaset-controller |
cloud-credential-operator-5cf49b6487 |
SuccessfulCreate |
Created pod: cloud-credential-operator-5cf49b6487-4cf2d | |
openshift-cluster-storage-operator |
default-scheduler |
csi-snapshot-controller-operator-7ff96dd767-9htmf |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | |
openshift-cluster-storage-operator |
replicaset-controller |
csi-snapshot-controller-operator-7ff96dd767 |
SuccessfulCreate |
Created pod: csi-snapshot-controller-operator-7ff96dd767-9htmf | |
openshift-machine-api |
default-scheduler |
cluster-autoscaler-operator-7ff449c7c5-nmpfk |
FailedScheduling |
0/2 nodes are available: 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. | |
default |
node-controller |
master-1 |
RegisteredNode |
Node master-1 event: Registered Node master-1 in Controller | |
openshift-machine-api |
replicaset-controller |
cluster-autoscaler-operator-7ff449c7c5 |
SuccessfulCreate |
Created pod: cluster-autoscaler-operator-7ff449c7c5-nmpfk | |
openshift-cluster-node-tuning-operator |
default-scheduler |
cluster-node-tuning-operator-7866c9bdf4-d4dlj |
FailedScheduling |
0/2 nodes are available: 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. | |
openshift-monitoring |
replicaset-controller |
cluster-monitoring-operator-5b5dd85dcc |
SuccessfulCreate |
Created pod: cluster-monitoring-operator-5b5dd85dcc-cxtgh | |
default |
node-controller |
master-2 |
RegisteredNode |
Node master-2 event: Registered Node master-2 in Controller | |
openshift-cluster-node-tuning-operator |
replicaset-controller |
cluster-node-tuning-operator-7866c9bdf4 |
SuccessfulCreate |
Created pod: cluster-node-tuning-operator-7866c9bdf4-d4dlj | |
openshift-monitoring |
default-scheduler |
cluster-monitoring-operator-5b5dd85dcc-cxtgh |
FailedScheduling |
0/2 nodes are available: 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. | |
openshift-kube-apiserver-operator |
default-scheduler |
kube-apiserver-operator-68f5d95b74-bqdtw |
FailedScheduling |
0/2 nodes are available: 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. | |
openshift-kube-apiserver-operator |
replicaset-controller |
kube-apiserver-operator-68f5d95b74 |
SuccessfulCreate |
Created pod: kube-apiserver-operator-68f5d95b74-bqdtw | |
openshift-ingress-operator |
default-scheduler |
ingress-operator-766ddf4575-xhdjt |
FailedScheduling |
0/2 nodes are available: 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. | |
openshift-operator-lifecycle-manager |
replicaset-controller |
package-server-manager-798cc87f55 |
SuccessfulCreate |
Created pod: package-server-manager-798cc87f55-j2bjv | |
openshift-cloud-controller-manager-operator |
default-scheduler |
cluster-cloud-controller-manager-operator-6d4bdff5b8-xzw9t |
Scheduled |
Successfully assigned openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6d4bdff5b8-xzw9t to master-2 | |
openshift-ingress-operator |
replicaset-controller |
ingress-operator-766ddf4575 |
SuccessfulCreate |
Created pod: ingress-operator-766ddf4575-xhdjt | |
openshift-operator-lifecycle-manager |
default-scheduler |
package-server-manager-798cc87f55-j2bjv |
FailedScheduling |
0/2 nodes are available: 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. | |
openshift-cloud-controller-manager-operator |
replicaset-controller |
cluster-cloud-controller-manager-operator-6d4bdff5b8 |
SuccessfulCreate |
Created pod: cluster-cloud-controller-manager-operator-6d4bdff5b8-xzw9t | |
openshift-cluster-version |
replicaset-controller |
cluster-version-operator-55bd67947c |
SuccessfulCreate |
Created pod: cluster-version-operator-55bd67947c-872k9 | |
openshift-cluster-version |
default-scheduler |
cluster-version-operator-55bd67947c-872k9 |
Scheduled |
Successfully assigned openshift-cluster-version/cluster-version-operator-55bd67947c-872k9 to master-2 | |
openshift-operator-lifecycle-manager |
replicaset-controller |
olm-operator-867f8475d9 |
SuccessfulCreate |
Created pod: olm-operator-867f8475d9-fl56c | |
openshift-image-registry |
default-scheduler |
cluster-image-registry-operator-6b8674d7ff-gspqw |
FailedScheduling |
0/2 nodes are available: 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. | |
openshift-image-registry |
replicaset-controller |
cluster-image-registry-operator-6b8674d7ff |
SuccessfulCreate |
Created pod: cluster-image-registry-operator-6b8674d7ff-gspqw | |
openshift-operator-lifecycle-manager |
default-scheduler |
olm-operator-867f8475d9-fl56c |
FailedScheduling |
0/2 nodes are available: 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. | |
openshift-config-operator |
replicaset-controller |
openshift-config-operator-55957b47d5 |
SuccessfulCreate |
Created pod: openshift-config-operator-55957b47d5-vtkr6 | |
openshift-machine-api |
default-scheduler |
control-plane-machine-set-operator-84f9cbd5d9-n87md |
FailedScheduling |
0/2 nodes are available: 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. | |
openshift-machine-api | replicaset-controller | cluster-baremetal-operator-6c8fbf4498 | SuccessfulCreate | Created pod: cluster-baremetal-operator-6c8fbf4498-kcckh | |
openshift-machine-config-operator | replicaset-controller | machine-config-operator-7b75469658 | SuccessfulCreate | Created pod: machine-config-operator-7b75469658-j2dbc | |
openshift-operator-lifecycle-manager | default-scheduler | catalog-operator-f966fb6f8-dwwm2 | FailedScheduling | 0/2 nodes are available: 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. | |
openshift-machine-api | default-scheduler | cluster-baremetal-operator-6c8fbf4498-kcckh | FailedScheduling | 0/2 nodes are available: 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. | |
openshift-machine-api | replicaset-controller | control-plane-machine-set-operator-84f9cbd5d9 | SuccessfulCreate | Created pod: control-plane-machine-set-operator-84f9cbd5d9-n87md | |
openshift-config-operator | default-scheduler | openshift-config-operator-55957b47d5-vtkr6 | FailedScheduling | 0/2 nodes are available: 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. | |
openshift-machine-config-operator | default-scheduler | machine-config-operator-7b75469658-j2dbc | FailedScheduling | 0/2 nodes are available: 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. | |
openshift-operator-lifecycle-manager | replicaset-controller | catalog-operator-f966fb6f8 | SuccessfulCreate | Created pod: catalog-operator-f966fb6f8-dwwm2 | |
openshift-cluster-storage-operator | default-scheduler | cluster-storage-operator-56d4b95494-7ff2l | FailedScheduling | 0/2 nodes are available: 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. | |
openshift-cluster-storage-operator | replicaset-controller | cluster-storage-operator-56d4b95494 | SuccessfulCreate | Created pod: cluster-storage-operator-56d4b95494-7ff2l | |
openshift-cluster-machine-approver | default-scheduler | machine-approver-597b8f6cd6-wlpfn | FailedScheduling | 0/2 nodes are available: 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. | |
openshift-machine-api | replicaset-controller | machine-api-operator-9dbb96f7 | SuccessfulCreate | Created pod: machine-api-operator-9dbb96f7-s66vj | |
openshift-insights | replicaset-controller | insights-operator-7dcf5bd85b | SuccessfulCreate | Created pod: insights-operator-7dcf5bd85b-chrmm | |
openshift-machine-api | default-scheduler | machine-api-operator-9dbb96f7-s66vj | FailedScheduling | 0/2 nodes are available: 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. | |
openshift-insights | default-scheduler | insights-operator-7dcf5bd85b-chrmm | FailedScheduling | 0/2 nodes are available: 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. | |
assisted-installer | default-scheduler | assisted-installer-controller-mzrkb | FailedScheduling | 0/2 nodes are available: 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. | |
openshift-cluster-machine-approver | replicaset-controller | machine-approver-597b8f6cd6 | SuccessfulCreate | Created pod: machine-approver-597b8f6cd6-wlpfn | |
openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-6d4bdff5b8-xzw9t | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:201e8fc1896dadc01ce68cec4c7437f12ddc3ac35792cc4d193242b5c41f48e1" | |
openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-6d4bdff5b8-xzw9t | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:201e8fc1896dadc01ce68cec4c7437f12ddc3ac35792cc4d193242b5c41f48e1" already present on machine | |
openshift-cloud-controller-manager | cloud-controller-manager-operator | openshift-cloud-controller-manager | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} | |
openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-6d4bdff5b8-xzw9t | Started | Started container config-sync-controllers | |
openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-6d4bdff5b8-xzw9t | Created | Created container: config-sync-controllers | |
openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-6d4bdff5b8-xzw9t | Started | Started container cluster-cloud-controller-manager | |
openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-6d4bdff5b8-xzw9t | Created | Created container: cluster-cloud-controller-manager | |
openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-6d4bdff5b8-xzw9t | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:201e8fc1896dadc01ce68cec4c7437f12ddc3ac35792cc4d193242b5c41f48e1" in 2.677s (2.677s including waiting). Image size: 550463190 bytes. | |
openshift-cloud-controller-manager-operator | master-2_d51f127d-cf09-4742-9a7c-320abe3a3339 | cluster-cloud-controller-manager-leader | LeaderElection | master-2_d51f127d-cf09-4742-9a7c-320abe3a3339 became leader | |
openshift-cloud-controller-manager-operator | master-2_81cca2fa-3160-4cd8-aeca-04bc0bdf8557 | cluster-cloud-config-sync-leader | LeaderElection | master-2_81cca2fa-3160-4cd8-aeca-04bc0bdf8557 became leader | |
(x5) | openshift-machine-config-operator | kubelet | kube-rbac-proxy-crio-master-2 | BackOff | Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-2_openshift-machine-config-operator(f022eff2d978fee6b366ac18a80aa53c)
openshift-cluster-machine-approver | replicaset-controller | machine-approver-7876f99457 | SuccessfulCreate | Created pod: machine-approver-7876f99457-kpq7g | |
openshift-cluster-machine-approver | deployment-controller | machine-approver | ScalingReplicaSet | Scaled down replica set machine-approver-597b8f6cd6 to 0 from 1 | |
openshift-cluster-machine-approver | deployment-controller | machine-approver | ScalingReplicaSet | Scaled up replica set machine-approver-7876f99457 to 1 | |
openshift-cluster-machine-approver | default-scheduler | machine-approver-7876f99457-kpq7g | FailedScheduling | 0/2 nodes are available: 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. | |
openshift-cluster-machine-approver | replicaset-controller | machine-approver-597b8f6cd6 | SuccessfulDelete | Deleted pod: machine-approver-597b8f6cd6-wlpfn | |
openshift-cluster-machine-approver | default-scheduler | machine-approver-597b8f6cd6-wlpfn | FailedScheduling | skip schedule deleting pod: openshift-cluster-machine-approver/machine-approver-597b8f6cd6-wlpfn | |
(x3) | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-6d4bdff5b8-xzw9t | Started | Started container kube-rbac-proxy
(x5) | openshift-machine-config-operator | kubelet | kube-rbac-proxy-crio-master-1 | BackOff | Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-1_openshift-machine-config-operator(3273b5dc02e0d8cacbf64fe78c713d50)
(x3) | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-6d4bdff5b8-xzw9t | Created | Created container: kube-rbac-proxy
(x3) | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-6d4bdff5b8-xzw9t | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine
(x3) | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-6d4bdff5b8-xzw9t | BackOff | Back-off restarting failed container kube-rbac-proxy in pod cluster-cloud-controller-manager-operator-6d4bdff5b8-xzw9t_openshift-cloud-controller-manager-operator(78911cd1-bf7b-4ba1-8993-c10b848879bd)
openshift-service-ca-operator | replicaset-controller | service-ca-operator-568c655666 | SuccessfulCreate | Created pod: service-ca-operator-568c655666-t6c8q | |
openshift-service-ca-operator | default-scheduler | service-ca-operator-568c655666-t6c8q | FailedScheduling | 0/2 nodes are available: 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. | |
openshift-network-operator | kubelet | network-operator-854f54f8c9-t6kgz | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1656551c63dc1b09263ccc5fb52a13dff12d57e1c7510529789df1b41d253aa9" | |
openshift-kube-scheduler-operator | default-scheduler | openshift-kube-scheduler-operator-766d6b44f6-gtvcp | FailedScheduling | 0/2 nodes are available: 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. | |
openshift-kube-scheduler-operator | replicaset-controller | openshift-kube-scheduler-operator-766d6b44f6 | SuccessfulCreate | Created pod: openshift-kube-scheduler-operator-766d6b44f6-gtvcp | |
openshift-kube-controller-manager-operator | default-scheduler | kube-controller-manager-operator-5d85974df9-ppzvt | FailedScheduling | 0/2 nodes are available: 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. | |
openshift-cluster-olm-operator | replicaset-controller | cluster-olm-operator-77b56b6f4f | SuccessfulCreate | Created pod: cluster-olm-operator-77b56b6f4f-prtfl | |
openshift-kube-controller-manager-operator | replicaset-controller | kube-controller-manager-operator-5d85974df9 | SuccessfulCreate | Created pod: kube-controller-manager-operator-5d85974df9-ppzvt | |
openshift-network-operator | default-scheduler | network-operator-854f54f8c9-t6kgz | Scheduled | Successfully assigned openshift-network-operator/network-operator-854f54f8c9-t6kgz to master-1 | |
openshift-cluster-olm-operator | default-scheduler | cluster-olm-operator-77b56b6f4f-prtfl | FailedScheduling | 0/2 nodes are available: 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. | |
openshift-network-operator | replicaset-controller | network-operator-854f54f8c9 | SuccessfulCreate | Created pod: network-operator-854f54f8c9-t6kgz | |
openshift-cloud-controller-manager-operator | replicaset-controller | cluster-cloud-controller-manager-operator-6d4bdff5b8 | SuccessfulDelete | Deleted pod: cluster-cloud-controller-manager-operator-6d4bdff5b8-xzw9t | |
openshift-cloud-controller-manager-operator | deployment-controller | cluster-cloud-controller-manager-operator | ScalingReplicaSet | Scaled up replica set cluster-cloud-controller-manager-operator-779749f859 to 1 | |
openshift-controller-manager-operator | default-scheduler | openshift-controller-manager-operator-5745565d84-5l45t | FailedScheduling | 0/2 nodes are available: 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. | |
openshift-cloud-controller-manager-operator | default-scheduler | cluster-cloud-controller-manager-operator-779749f859-bscv5 | Scheduled | Successfully assigned openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-779749f859-bscv5 to master-2 | |
openshift-controller-manager-operator | replicaset-controller | openshift-controller-manager-operator-5745565d84 | SuccessfulCreate | Created pod: openshift-controller-manager-operator-5745565d84-5l45t | |
openshift-cloud-controller-manager-operator | replicaset-controller | cluster-cloud-controller-manager-operator-779749f859 | SuccessfulCreate | Created pod: cluster-cloud-controller-manager-operator-779749f859-bscv5 | |
openshift-cloud-controller-manager-operator | deployment-controller | cluster-cloud-controller-manager-operator | ScalingReplicaSet | Scaled down replica set cluster-cloud-controller-manager-operator-6d4bdff5b8 to 0 from 1 | |
openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-6d4bdff5b8-xzw9t | Killing | Stopping container config-sync-controllers | |
openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-6d4bdff5b8-xzw9t | Killing | Stopping container cluster-cloud-controller-manager | |
openshift-apiserver-operator | default-scheduler | openshift-apiserver-operator-7d88655794-dbtvc | FailedScheduling | 0/2 nodes are available: 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. | |
openshift-etcd-operator | replicaset-controller | etcd-operator-6bddf7d79 | SuccessfulCreate | Created pod: etcd-operator-6bddf7d79-dtp9l | |
openshift-apiserver-operator | replicaset-controller | openshift-apiserver-operator-7d88655794 | SuccessfulCreate | Created pod: openshift-apiserver-operator-7d88655794-dbtvc | |
openshift-etcd-operator | default-scheduler | etcd-operator-6bddf7d79-dtp9l | FailedScheduling | 0/2 nodes are available: 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. | |
openshift-dns-operator | replicaset-controller | dns-operator-7769d9677 | SuccessfulCreate | Created pod: dns-operator-7769d9677-nh2qc | |
openshift-dns-operator | default-scheduler | dns-operator-7769d9677-nh2qc | FailedScheduling | 0/2 nodes are available: 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. | |
openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-779749f859-bscv5 | Started | Started container config-sync-controllers | |
openshift-authentication-operator | default-scheduler | authentication-operator-66df44bc95-gldlr | FailedScheduling | 0/2 nodes are available: 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. | |
openshift-marketplace | default-scheduler | marketplace-operator-c4f798dd4-djh96 | FailedScheduling | 0/2 nodes are available: 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. | |
openshift-cloud-controller-manager | cloud-controller-manager-operator | openshift-cloud-controller-manager | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} | |
openshift-marketplace | replicaset-controller | marketplace-operator-c4f798dd4 | SuccessfulCreate | Created pod: marketplace-operator-c4f798dd4-djh96 | |
openshift-kube-storage-version-migrator-operator | replicaset-controller | kube-storage-version-migrator-operator-dcfdffd74 | SuccessfulCreate | Created pod: kube-storage-version-migrator-operator-dcfdffd74-ckmcc | |
openshift-kube-storage-version-migrator-operator | default-scheduler | kube-storage-version-migrator-operator-dcfdffd74-ckmcc | FailedScheduling | 0/2 nodes are available: 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. | |
openshift-authentication-operator | replicaset-controller | authentication-operator-66df44bc95 | SuccessfulCreate | Created pod: authentication-operator-66df44bc95-gldlr | |
openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-779749f859-bscv5 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:201e8fc1896dadc01ce68cec4c7437f12ddc3ac35792cc4d193242b5c41f48e1" already present on machine | |
openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-779749f859-bscv5 | Created | Created container: cluster-cloud-controller-manager | |
openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-779749f859-bscv5 | Started | Started container cluster-cloud-controller-manager | |
openshift-network-operator | kubelet | network-operator-854f54f8c9-t6kgz | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1656551c63dc1b09263ccc5fb52a13dff12d57e1c7510529789df1b41d253aa9" in 3.443s (3.443s including waiting). Image size: 614682093 bytes. | |
openshift-network-operator | network-operator | network-operator-lock | LeaderElection | master-1_79a466dd-6609-44f3-b0a7-171797f8c9ec became leader | |
openshift-network-operator | cluster-network-operator | network-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} | |
openshift-network-operator | job-controller | mtu-prober | SuccessfulCreate | Created pod: mtu-prober-jqdjc | |
(x4) | openshift-machine-config-operator | kubelet | kube-rbac-proxy-crio-master-2 | Started | Started container kube-rbac-proxy-crio
openshift-network-operator | default-scheduler | mtu-prober-jqdjc | Scheduled | Successfully assigned openshift-network-operator/mtu-prober-jqdjc to master-1 | |
(x4) | openshift-machine-config-operator | kubelet | kube-rbac-proxy-crio-master-2 | Created | Created container: kube-rbac-proxy-crio
(x4) | openshift-machine-config-operator | kubelet | kube-rbac-proxy-crio-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine
openshift-network-operator | kubelet | mtu-prober-jqdjc | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1656551c63dc1b09263ccc5fb52a13dff12d57e1c7510529789df1b41d253aa9" already present on machine | |
openshift-network-operator | kubelet | mtu-prober-jqdjc | Started | Started container prober | |
openshift-network-operator | kubelet | mtu-prober-jqdjc | Created | Created container: prober | |
openshift-network-operator | job-controller | mtu-prober | Completed | Job completed | |
(x4) | openshift-machine-config-operator | kubelet | kube-rbac-proxy-crio-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine
(x4) | openshift-machine-config-operator | kubelet | kube-rbac-proxy-crio-master-1 | Started | Started container kube-rbac-proxy-crio
(x4) | openshift-machine-config-operator | kubelet | kube-rbac-proxy-crio-master-1 | Created | Created container: kube-rbac-proxy-crio
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-multus namespace | |
openshift-multus | default-scheduler | multus-tq8hl | Scheduled | Successfully assigned openshift-multus/multus-tq8hl to master-1 | |
openshift-multus | daemonset-controller | multus | SuccessfulCreate | Created pod: multus-g75pn | |
openshift-multus | daemonset-controller | multus | SuccessfulCreate | Created pod: multus-tq8hl | |
openshift-multus | default-scheduler | multus-g75pn | Scheduled | Successfully assigned openshift-multus/multus-g75pn to master-2 | |
openshift-multus | kubelet | multus-additional-cni-plugins-tn87t | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbde693d384ae08cdaf9126a9a6359bb5515793f63108ef216cbddf1c995af3e" | |
openshift-multus | default-scheduler | network-metrics-daemon-b84p7 | Scheduled | Successfully assigned openshift-multus/network-metrics-daemon-b84p7 to master-2 | |
openshift-multus | daemonset-controller | multus-additional-cni-plugins | SuccessfulCreate | Created pod: multus-additional-cni-plugins-mgfql | |
openshift-multus | default-scheduler | network-metrics-daemon-8l654 | Scheduled | Successfully assigned openshift-multus/network-metrics-daemon-8l654 to master-1 | |
openshift-multus | kubelet | multus-g75pn | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd0854905c4929cfbb163b57dd290d4a74e65d11c01d86b5e1e177a0c246106e" | |
openshift-multus | default-scheduler | multus-additional-cni-plugins-mgfql | Scheduled | Successfully assigned openshift-multus/multus-additional-cni-plugins-mgfql to master-2 | |
openshift-multus | kubelet | multus-additional-cni-plugins-mgfql | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbde693d384ae08cdaf9126a9a6359bb5515793f63108ef216cbddf1c995af3e" | |
openshift-multus | daemonset-controller | network-metrics-daemon | SuccessfulCreate | Created pod: network-metrics-daemon-b84p7 | |
openshift-multus | daemonset-controller | network-metrics-daemon | SuccessfulCreate | Created pod: network-metrics-daemon-8l654 | |
openshift-multus | kubelet | multus-tq8hl | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd0854905c4929cfbb163b57dd290d4a74e65d11c01d86b5e1e177a0c246106e" | |
openshift-multus | daemonset-controller | multus-additional-cni-plugins | SuccessfulCreate | Created pod: multus-additional-cni-plugins-tn87t | |
openshift-multus | default-scheduler | multus-additional-cni-plugins-tn87t | Scheduled | Successfully assigned openshift-multus/multus-additional-cni-plugins-tn87t to master-1 | |
openshift-multus | default-scheduler | multus-admission-controller-77b66fddc8-9npgz | FailedScheduling | 0/2 nodes are available: 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. | |
openshift-multus | default-scheduler | multus-admission-controller-77b66fddc8-mgc7h | FailedScheduling | 0/2 nodes are available: 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. | |
openshift-multus | kubelet | multus-additional-cni-plugins-tn87t | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbde693d384ae08cdaf9126a9a6359bb5515793f63108ef216cbddf1c995af3e" in 2.224s (2.224s including waiting). Image size: 530836538 bytes. | |
openshift-multus | kubelet | multus-additional-cni-plugins-tn87t | Started | Started container egress-router-binary-copy | |
openshift-multus | kubelet | multus-additional-cni-plugins-tn87t | Created | Created container: egress-router-binary-copy | |
openshift-multus | deployment-controller | multus-admission-controller | ScalingReplicaSet | Scaled up replica set multus-admission-controller-77b66fddc8 to 2 | |
openshift-multus | replicaset-controller | multus-admission-controller-77b66fddc8 | SuccessfulCreate | Created pod: multus-admission-controller-77b66fddc8-9npgz | |
openshift-multus | replicaset-controller | multus-admission-controller-77b66fddc8 | SuccessfulCreate | Created pod: multus-admission-controller-77b66fddc8-mgc7h | |
openshift-multus | kubelet | multus-additional-cni-plugins-mgfql | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6128c3fda0a374e4e705551260ee45b426a747e9d3e450d4ca1a3714fd404207" | |
openshift-multus | kubelet | multus-additional-cni-plugins-mgfql | Created | Created container: egress-router-binary-copy | |
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-ovn-kubernetes namespace | |
openshift-multus | kubelet | multus-additional-cni-plugins-mgfql | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbde693d384ae08cdaf9126a9a6359bb5515793f63108ef216cbddf1c995af3e" in 2.61s (2.61s including waiting). Image size: 530836538 bytes. | |
openshift-multus | kubelet | multus-additional-cni-plugins-tn87t | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6128c3fda0a374e4e705551260ee45b426a747e9d3e450d4ca1a3714fd404207" | |
openshift-multus | kubelet | multus-additional-cni-plugins-mgfql | Started | Started container egress-router-binary-copy | |
openshift-multus | kubelet | multus-tq8hl | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd0854905c4929cfbb163b57dd290d4a74e65d11c01d86b5e1e177a0c246106e" in 11.413s (11.413s including waiting). Image size: 1230574268 bytes. | |
openshift-multus | kubelet | multus-additional-cni-plugins-tn87t | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6128c3fda0a374e4e705551260ee45b426a747e9d3e450d4ca1a3714fd404207" in 8.437s (8.437s including waiting). Image size: 684971018 bytes. | |
openshift-multus | kubelet | multus-tq8hl | Started | Started container kube-multus | |
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-host-network namespace | |
openshift-multus | kubelet | multus-additional-cni-plugins-tn87t | Started | Started container cni-plugins | |
openshift-multus | kubelet | multus-additional-cni-plugins-tn87t | Created | Created container: cni-plugins | |
openshift-multus | kubelet | multus-tq8hl | Created | Created container: kube-multus | |
openshift-ovn-kubernetes |
daemonset-controller |
ovnkube-node |
SuccessfulCreate |
Created pod: ovnkube-node-ssgb2 | |
openshift-ovn-kubernetes |
default-scheduler |
ovnkube-node-ssgb2 |
Scheduled |
Successfully assigned openshift-ovn-kubernetes/ovnkube-node-ssgb2 to master-2 | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-control-plane-864d695c77-zrhxj |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-control-plane-864d695c77-zrhxj |
Created |
Created container: kube-rbac-proxy | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-control-plane-864d695c77-zrhxj |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b05c14f2032f7ba3017e9bcb6b3be4e7eaed8223e30a721b46b24f9cdcbd6a95" | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-g2f76 |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b05c14f2032f7ba3017e9bcb6b3be4e7eaed8223e30a721b46b24f9cdcbd6a95" | |
openshift-ovn-kubernetes |
default-scheduler |
ovnkube-control-plane-864d695c77-vbf9m |
Scheduled |
Successfully assigned openshift-ovn-kubernetes/ovnkube-control-plane-864d695c77-vbf9m to master-2 | |
openshift-ovn-kubernetes |
default-scheduler |
ovnkube-control-plane-864d695c77-zrhxj |
Scheduled |
Successfully assigned openshift-ovn-kubernetes/ovnkube-control-plane-864d695c77-zrhxj to master-1 | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-tn87t |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9c713df8493f490d2cd316861e6f63bc27078cda759dd9dd2817f101f233db28" | |
openshift-ovn-kubernetes |
deployment-controller |
ovnkube-control-plane |
ScalingReplicaSet |
Scaled up replica set ovnkube-control-plane-864d695c77 to 2 | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-control-plane-864d695c77-zrhxj |
Started |
Started container kube-rbac-proxy | |
openshift-ovn-kubernetes |
daemonset-controller |
ovnkube-node |
SuccessfulCreate |
Created pod: ovnkube-node-g2f76 | |
openshift-ovn-kubernetes |
replicaset-controller |
ovnkube-control-plane-864d695c77 |
SuccessfulCreate |
Created pod: ovnkube-control-plane-864d695c77-zrhxj | |
openshift-ovn-kubernetes |
replicaset-controller |
ovnkube-control-plane-864d695c77 |
SuccessfulCreate |
Created pod: ovnkube-control-plane-864d695c77-vbf9m | |
openshift-ovn-kubernetes |
default-scheduler |
ovnkube-node-g2f76 |
Scheduled |
Successfully assigned openshift-ovn-kubernetes/ovnkube-node-g2f76 to master-1 | |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-network-diagnostics namespace |
| | openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-864d695c77-vbf9m | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b05c14f2032f7ba3017e9bcb6b3be4e7eaed8223e30a721b46b24f9cdcbd6a95" |
| | openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-864d695c77-vbf9m | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine |
| | openshift-multus | kubelet | multus-additional-cni-plugins-tn87t | Started | Started container bond-cni-plugin |
| | openshift-multus | kubelet | multus-additional-cni-plugins-tn87t | Created | Created container: bond-cni-plugin |
| | openshift-multus | kubelet | multus-additional-cni-plugins-tn87t | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9c713df8493f490d2cd316861e6f63bc27078cda759dd9dd2817f101f233db28" in 1.033s (1.033s including waiting). Image size: 404610285 bytes. |
| | openshift-multus | kubelet | multus-additional-cni-plugins-mgfql | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6128c3fda0a374e4e705551260ee45b426a747e9d3e450d4ca1a3714fd404207" in 9.516s (9.516s including waiting). Image size: 684971018 bytes. |
| | openshift-multus | kubelet | multus-additional-cni-plugins-mgfql | Created | Created container: cni-plugins |
| | openshift-multus | kubelet | multus-additional-cni-plugins-mgfql | Started | Started container cni-plugins |
| | openshift-multus | kubelet | multus-g75pn | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd0854905c4929cfbb163b57dd290d4a74e65d11c01d86b5e1e177a0c246106e" in 13.106s (13.106s including waiting). Image size: 1230574268 bytes. |
| | openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-864d695c77-vbf9m | Started | Started container kube-rbac-proxy |
| | openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-864d695c77-vbf9m | Created | Created container: kube-rbac-proxy |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-ssgb2 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b05c14f2032f7ba3017e9bcb6b3be4e7eaed8223e30a721b46b24f9cdcbd6a95" |
| | openshift-multus | kubelet | multus-g75pn | Created | Created container: kube-multus |
| | openshift-multus | kubelet | multus-additional-cni-plugins-mgfql | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9c713df8493f490d2cd316861e6f63bc27078cda759dd9dd2817f101f233db28" |
| | openshift-multus | kubelet | multus-g75pn | Started | Started container kube-multus |
| | openshift-network-diagnostics | default-scheduler | network-check-source-967c7bb47-bzqnw | FailedScheduling | 0/2 nodes are available: 2 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. |
| | openshift-network-diagnostics | replicaset-controller | network-check-source-967c7bb47 | SuccessfulCreate | Created pod: network-check-source-967c7bb47-bzqnw |
| | openshift-network-diagnostics | deployment-controller | network-check-source | ScalingReplicaSet | Scaled up replica set network-check-source-967c7bb47 to 1 |
| | openshift-multus | kubelet | multus-additional-cni-plugins-tn87t | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0b95ed8eaa90077acc5910504a338c0b5eea8a9b6632868366d72d48a4b6f2c4" |
| | openshift-multus | kubelet | multus-additional-cni-plugins-mgfql | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0b95ed8eaa90077acc5910504a338c0b5eea8a9b6632868366d72d48a4b6f2c4" |
| | openshift-multus | kubelet | multus-additional-cni-plugins-mgfql | Started | Started container bond-cni-plugin |
| | openshift-network-diagnostics | default-scheduler | network-check-target-cb5bh | Scheduled | Successfully assigned openshift-network-diagnostics/network-check-target-cb5bh to master-2 |
| | openshift-network-diagnostics | default-scheduler | network-check-target-sndvg | Scheduled | Successfully assigned openshift-network-diagnostics/network-check-target-sndvg to master-1 |
| | openshift-network-diagnostics | daemonset-controller | network-check-target | SuccessfulCreate | Created pod: network-check-target-sndvg |
| | openshift-network-diagnostics | daemonset-controller | network-check-target | SuccessfulCreate | Created pod: network-check-target-cb5bh |
| | openshift-multus | kubelet | multus-additional-cni-plugins-mgfql | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9c713df8493f490d2cd316861e6f63bc27078cda759dd9dd2817f101f233db28" in 1.639s (1.639s including waiting). Image size: 404610285 bytes. |
| | openshift-multus | kubelet | multus-additional-cni-plugins-mgfql | Created | Created container: bond-cni-plugin |
| | openshift-multus | kubelet | multus-additional-cni-plugins-mgfql | Created | Created container: routeoverride-cni |
| | openshift-multus | kubelet | multus-additional-cni-plugins-mgfql | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0b95ed8eaa90077acc5910504a338c0b5eea8a9b6632868366d72d48a4b6f2c4" in 857ms (857ms including waiting). Image size: 400384094 bytes. |
| | openshift-multus | kubelet | multus-additional-cni-plugins-mgfql | Started | Started container routeoverride-cni |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-network-node-identity namespace |
| | openshift-multus | kubelet | multus-additional-cni-plugins-tn87t | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0b95ed8eaa90077acc5910504a338c0b5eea8a9b6632868366d72d48a4b6f2c4" in 3.108s (3.108s including waiting). Image size: 400384094 bytes. |
| | openshift-multus | kubelet | multus-additional-cni-plugins-mgfql | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7983420590be0b0f62b726996dd73769a35c23a4b3b283f8cf20e09418e814eb" |
| | openshift-multus | kubelet | multus-additional-cni-plugins-tn87t | Started | Started container routeoverride-cni |
| | openshift-multus | kubelet | multus-additional-cni-plugins-tn87t | Created | Created container: routeoverride-cni |
| | openshift-network-node-identity | default-scheduler | network-node-identity-5tzml | Scheduled | Successfully assigned openshift-network-node-identity/network-node-identity-5tzml to master-2 |
| | openshift-multus | kubelet | multus-additional-cni-plugins-tn87t | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7983420590be0b0f62b726996dd73769a35c23a4b3b283f8cf20e09418e814eb" |
| | openshift-network-node-identity | daemonset-controller | network-node-identity | SuccessfulCreate | Created pod: network-node-identity-5tzml |
| | openshift-network-node-identity | daemonset-controller | network-node-identity | SuccessfulCreate | Created pod: network-node-identity-rsr2v |
| | openshift-network-node-identity | kubelet | network-node-identity-rsr2v | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b05c14f2032f7ba3017e9bcb6b3be4e7eaed8223e30a721b46b24f9cdcbd6a95" |
| | openshift-network-node-identity | default-scheduler | network-node-identity-rsr2v | Scheduled | Successfully assigned openshift-network-node-identity/network-node-identity-rsr2v to master-1 |
| | openshift-network-node-identity | kubelet | network-node-identity-5tzml | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b05c14f2032f7ba3017e9bcb6b3be4e7eaed8223e30a721b46b24f9cdcbd6a95" |
| (x8) | openshift-cluster-version | kubelet | cluster-version-operator-55bd67947c-872k9 | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : secret "cluster-version-operator-serving-cert" not found |
| | openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-864d695c77-zrhxj | Started | Started container ovnkube-cluster-manager |
| | openshift-network-node-identity | kubelet | network-node-identity-5tzml | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b05c14f2032f7ba3017e9bcb6b3be4e7eaed8223e30a721b46b24f9cdcbd6a95" in 10.028s (10.028s including waiting). Image size: 1565215279 bytes. |
| | openshift-multus | kubelet | multus-additional-cni-plugins-mgfql | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7983420590be0b0f62b726996dd73769a35c23a4b3b283f8cf20e09418e814eb" in 10.469s (10.469s including waiting). Image size: 869140966 bytes. |
| | openshift-multus | kubelet | multus-additional-cni-plugins-mgfql | Started | Started container whereabouts-cni-bincopy |
| | openshift-multus | kubelet | multus-additional-cni-plugins-mgfql | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7983420590be0b0f62b726996dd73769a35c23a4b3b283f8cf20e09418e814eb" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-ssgb2 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b05c14f2032f7ba3017e9bcb6b3be4e7eaed8223e30a721b46b24f9cdcbd6a95" in 15.118s (15.118s including waiting). Image size: 1565215279 bytes. |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-ssgb2 | Created | Created container: kubecfg-setup |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-ssgb2 | Started | Started container kubecfg-setup |
| (x4) | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-779749f859-bscv5 | Started | Started container kube-rbac-proxy |
| | openshift-multus | kubelet | multus-additional-cni-plugins-tn87t | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7983420590be0b0f62b726996dd73769a35c23a4b3b283f8cf20e09418e814eb" in 10.44s (10.44s including waiting). Image size: 869140966 bytes. |
| (x4) | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-779749f859-bscv5 | Created | Created container: kube-rbac-proxy |
| (x4) | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-779749f859-bscv5 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-ssgb2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b05c14f2032f7ba3017e9bcb6b3be4e7eaed8223e30a721b46b24f9cdcbd6a95" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-864d695c77-zrhxj | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b05c14f2032f7ba3017e9bcb6b3be4e7eaed8223e30a721b46b24f9cdcbd6a95" in 15.786s (15.786s including waiting). Image size: 1565215279 bytes. |
| | openshift-network-node-identity | master-2_783721c2-7cb8-41f2-9433-d0f5186e2d31 | ovnkube-identity | LeaderElection | master-2_783721c2-7cb8-41f2-9433-d0f5186e2d31 became leader |
| | openshift-multus | kubelet | multus-additional-cni-plugins-tn87t | Created | Created container: whereabouts-cni-bincopy |
| | openshift-multus | kubelet | multus-additional-cni-plugins-tn87t | Started | Started container whereabouts-cni-bincopy |
| | openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-864d695c77-zrhxj | Created | Created container: ovnkube-cluster-manager |
| | openshift-network-node-identity | kubelet | network-node-identity-rsr2v | Started | Started container approver |
| | openshift-network-node-identity | kubelet | network-node-identity-rsr2v | Created | Created container: approver |
| | openshift-network-node-identity | kubelet | network-node-identity-rsr2v | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b05c14f2032f7ba3017e9bcb6b3be4e7eaed8223e30a721b46b24f9cdcbd6a95" already present on machine |
| | openshift-network-node-identity | kubelet | network-node-identity-rsr2v | Started | Started container webhook |
| | openshift-network-node-identity | kubelet | network-node-identity-rsr2v | Created | Created container: webhook |
| | openshift-network-node-identity | kubelet | network-node-identity-rsr2v | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b05c14f2032f7ba3017e9bcb6b3be4e7eaed8223e30a721b46b24f9cdcbd6a95" in 10.286s (10.286s including waiting). Image size: 1565215279 bytes. |
| | openshift-ovn-kubernetes | ovnk-controlplane | ovn-kubernetes-master | LeaderElection | ovnkube-control-plane-864d695c77-vbf9m became leader |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-g2f76 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b05c14f2032f7ba3017e9bcb6b3be4e7eaed8223e30a721b46b24f9cdcbd6a95" in 15.842s (15.842s including waiting). Image size: 1565215279 bytes. |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-g2f76 | Created | Created container: kubecfg-setup |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-g2f76 | Started | Started container kubecfg-setup |
| | openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-864d695c77-vbf9m | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b05c14f2032f7ba3017e9bcb6b3be4e7eaed8223e30a721b46b24f9cdcbd6a95" in 14.95s (14.95s including waiting). Image size: 1565215279 bytes. |
| | openshift-network-node-identity | kubelet | network-node-identity-5tzml | Started | Started container webhook |
| | openshift-multus | kubelet | multus-additional-cni-plugins-mgfql | Created | Created container: whereabouts-cni-bincopy |
| | openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-864d695c77-vbf9m | Created | Created container: ovnkube-cluster-manager |
| | openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-864d695c77-vbf9m | Started | Started container ovnkube-cluster-manager |
| | openshift-network-node-identity | kubelet | network-node-identity-5tzml | Created | Created container: webhook |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-g2f76 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b05c14f2032f7ba3017e9bcb6b3be4e7eaed8223e30a721b46b24f9cdcbd6a95" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-g2f76 | Started | Started container ovn-controller |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-ssgb2 | Started | Started container kube-rbac-proxy-node |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-ssgb2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-g2f76 | Started | Started container northd |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-g2f76 | Created | Created container: northd |
| | openshift-multus | kubelet | multus-additional-cni-plugins-mgfql | Created | Created container: whereabouts-cni |
| | openshift-multus | kubelet | multus-additional-cni-plugins-tn87t | Started | Started container whereabouts-cni |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-g2f76 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b05c14f2032f7ba3017e9bcb6b3be4e7eaed8223e30a721b46b24f9cdcbd6a95" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-g2f76 | Started | Started container kube-rbac-proxy-ovn-metrics |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-g2f76 | Created | Created container: kube-rbac-proxy-ovn-metrics |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-g2f76 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-g2f76 | Started | Started container kube-rbac-proxy-node |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-g2f76 | Created | Created container: kube-rbac-proxy-node |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-g2f76 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-g2f76 | Started | Started container ovn-acl-logging |
| | openshift-multus | kubelet | multus-additional-cni-plugins-mgfql | Started | Started container whereabouts-cni |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-g2f76 | Created | Created container: ovn-acl-logging |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-g2f76 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b05c14f2032f7ba3017e9bcb6b3be4e7eaed8223e30a721b46b24f9cdcbd6a95" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-g2f76 | Created | Created container: nbdb |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-g2f76 | Created | Created container: ovn-controller |
| | openshift-multus | kubelet | multus-additional-cni-plugins-mgfql | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd0854905c4929cfbb163b57dd290d4a74e65d11c01d86b5e1e177a0c246106e" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-ssgb2 | Created | Created container: kube-rbac-proxy-node |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-ssgb2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-g2f76 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b05c14f2032f7ba3017e9bcb6b3be4e7eaed8223e30a721b46b24f9cdcbd6a95" already present on machine |
| | openshift-multus | kubelet | multus-additional-cni-plugins-tn87t | Created | Created container: whereabouts-cni |
| | openshift-multus | kubelet | multus-additional-cni-plugins-tn87t | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7983420590be0b0f62b726996dd73769a35c23a4b3b283f8cf20e09418e814eb" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-g2f76 | Started | Started container nbdb |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-ssgb2 | Started | Started container ovn-acl-logging |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-ssgb2 | Created | Created container: ovn-acl-logging |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-ssgb2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b05c14f2032f7ba3017e9bcb6b3be4e7eaed8223e30a721b46b24f9cdcbd6a95" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-ssgb2 | Started | Started container ovn-controller |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-ssgb2 | Created | Created container: ovn-controller |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-ssgb2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b05c14f2032f7ba3017e9bcb6b3be4e7eaed8223e30a721b46b24f9cdcbd6a95" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-ssgb2 | Started | Started container kube-rbac-proxy-ovn-metrics |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-ssgb2 | Created | Created container: northd |
| | openshift-multus | kubelet | multus-additional-cni-plugins-mgfql | Created | Created container: kube-multus-additional-cni-plugins |
| | openshift-multus | kubelet | multus-additional-cni-plugins-tn87t | Created | Created container: kube-multus-additional-cni-plugins |
| | openshift-multus | kubelet | multus-additional-cni-plugins-tn87t | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd0854905c4929cfbb163b57dd290d4a74e65d11c01d86b5e1e177a0c246106e" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-ssgb2 | Created | Created container: kube-rbac-proxy-ovn-metrics |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-ssgb2 | Started | Started container nbdb |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-ssgb2 | Created | Created container: nbdb |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-ssgb2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b05c14f2032f7ba3017e9bcb6b3be4e7eaed8223e30a721b46b24f9cdcbd6a95" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-ssgb2 | Started | Started container northd |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-g2f76 | Started | Started container sbdb |
| (x7) | openshift-multus | kubelet | network-metrics-daemon-8l654 | FailedMount | MountVolume.SetUp failed for volume "metrics-certs" : object "openshift-multus"/"metrics-daemon-secret" not registered |
| (x7) | openshift-multus | kubelet | network-metrics-daemon-b84p7 | FailedMount | MountVolume.SetUp failed for volume "metrics-certs" : object "openshift-multus"/"metrics-daemon-secret" not registered |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-g2f76 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b05c14f2032f7ba3017e9bcb6b3be4e7eaed8223e30a721b46b24f9cdcbd6a95" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-g2f76 | Created | Created container: sbdb |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-ssgb2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b05c14f2032f7ba3017e9bcb6b3be4e7eaed8223e30a721b46b24f9cdcbd6a95" already present on machine |
| (x18) | openshift-multus | kubelet | network-metrics-daemon-8l654 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-ssgb2 | Started | Started container sbdb |
| (x18) | openshift-multus | kubelet | network-metrics-daemon-b84p7 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-ssgb2 | Created | Created container: sbdb |
| | default | ovnkube-csr-approver-controller | csr-7kzvn | CSRApproved | CSR "csr-7kzvn" has been approved |
| | openshift-multus | ovnk-controlplane | network-metrics-daemon-8l654 | ErrorAddingResource | addLogicalPort failed for openshift-multus/network-metrics-daemon-8l654: unable to parse node L3 gw annotation: k8s.ovn.org/l3-gateway-config annotation not found for node "master-1" |
| | default | ovnkube-csr-approver-controller | csr-wmsbf | CSRApproved | CSR "csr-wmsbf" has been approved |
| | openshift-multus | ovnk-controlplane | network-metrics-daemon-8l654 | ErrorUpdatingResource | addLogicalPort failed for openshift-multus/network-metrics-daemon-8l654: unable to parse node L3 gw annotation: k8s.ovn.org/l3-gateway-config annotation not found for node "master-1" |
| | default | ovnk-controlplane | master-1 | ErrorAddingResource | [k8s.ovn.org/node-chassis-id annotation not found for node master-1, k8s.ovn.org/l3-gateway-config annotation not found for node "master-1", failed to update chassis to local for local node master-1, error: failed to parse node chassis-id for node - master-1, error: k8s.ovn.org/node-chassis-id annotation not found for node master-1] |
| | openshift-network-diagnostics | ovnk-controlplane | network-check-target-sndvg | ErrorUpdatingResource | addLogicalPort failed for openshift-network-diagnostics/network-check-target-sndvg: unable to parse node L3 gw annotation: k8s.ovn.org/l3-gateway-config annotation not found for node "master-1" |
| | openshift-network-diagnostics | ovnk-controlplane | network-check-target-sndvg | ErrorAddingResource | addLogicalPort failed for openshift-network-diagnostics/network-check-target-sndvg: unable to parse node L3 gw annotation: k8s.ovn.org/l3-gateway-config annotation not found for node "master-1" |
| | openshift-ovn-kubernetes | daemonset-controller | ovnkube-node | SuccessfulDelete | Deleted pod: ovnkube-node-g2f76 |
| | openshift-ovn-kubernetes | daemonset-controller | ovnkube-node | SuccessfulDelete | Deleted pod: ovnkube-node-ssgb2 |
| | default | ovnk-controlplane | master-2 | ErrorAddingResource | [k8s.ovn.org/node-chassis-id annotation not found for node master-2, k8s.ovn.org/l3-gateway-config annotation not found for node "master-2", failed to update chassis to local for local node master-2, error: failed to parse node chassis-id for node - master-2, error: k8s.ovn.org/node-chassis-id annotation not found for node master-2] |
| | openshift-ovn-kubernetes | default-scheduler | ovnkube-node-qvfnh | Scheduled | Successfully assigned openshift-ovn-kubernetes/ovnkube-node-qvfnh to master-1 |
| | openshift-ovn-kubernetes | daemonset-controller | ovnkube-node | SuccessfulCreate | Created pod: ovnkube-node-qvfnh |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-qvfnh | Started | Started container kubecfg-setup |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-4cthp | Started | Started container kubecfg-setup |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-4cthp | Created | Created container: kubecfg-setup |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-4cthp | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b05c14f2032f7ba3017e9bcb6b3be4e7eaed8223e30a721b46b24f9cdcbd6a95" already present on machine |
| | openshift-ovn-kubernetes | default-scheduler | ovnkube-node-4cthp | Scheduled | Successfully assigned openshift-ovn-kubernetes/ovnkube-node-4cthp to master-2 |
| | openshift-ovn-kubernetes | daemonset-controller | ovnkube-node | SuccessfulCreate | Created pod: ovnkube-node-4cthp |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-qvfnh | Created | Created container: kubecfg-setup |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-qvfnh | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b05c14f2032f7ba3017e9bcb6b3be4e7eaed8223e30a721b46b24f9cdcbd6a95" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-qvfnh | Started | Started container kube-rbac-proxy-ovn-metrics |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-qvfnh | Started | Started container ovn-acl-logging |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-4cthp | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b05c14f2032f7ba3017e9bcb6b3be4e7eaed8223e30a721b46b24f9cdcbd6a95" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-qvfnh | Created | Created container: nbdb |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-qvfnh | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b05c14f2032f7ba3017e9bcb6b3be4e7eaed8223e30a721b46b24f9cdcbd6a95" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-qvfnh | Started | Started container northd |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-qvfnh | Created | Created container: northd |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-qvfnh | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b05c14f2032f7ba3017e9bcb6b3be4e7eaed8223e30a721b46b24f9cdcbd6a95" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-qvfnh | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b05c14f2032f7ba3017e9bcb6b3be4e7eaed8223e30a721b46b24f9cdcbd6a95" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-qvfnh | Created | Created container: ovn-controller |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-qvfnh | Created | Created container: kube-rbac-proxy-ovn-metrics |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-qvfnh | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-qvfnh | Started | Started container ovn-controller |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-qvfnh | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b05c14f2032f7ba3017e9bcb6b3be4e7eaed8223e30a721b46b24f9cdcbd6a95" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-qvfnh | Started | Started container kube-rbac-proxy-node |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-qvfnh | Created | Created container: kube-rbac-proxy-node |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-qvfnh | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-qvfnh | Created | Created container: ovn-acl-logging |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-4cthp | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b05c14f2032f7ba3017e9bcb6b3be4e7eaed8223e30a721b46b24f9cdcbd6a95" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-4cthp | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-4cthp | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b05c14f2032f7ba3017e9bcb6b3be4e7eaed8223e30a721b46b24f9cdcbd6a95" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-4cthp | Started | Started container northd |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-4cthp | Started | Started container nbdb |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-qvfnh | Started | Started container nbdb |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-4cthp | Created | Created container: ovn-controller |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-4cthp | Started | Started container kube-rbac-proxy-ovn-metrics |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-4cthp | Started | Started container ovn-acl-logging |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-4cthp | Created | Created container: ovn-acl-logging |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-4cthp | Created | Created container: kube-rbac-proxy-node |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-4cthp | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-4cthp | Started | Started container kube-rbac-proxy-node |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-4cthp | Created | Created container: kube-rbac-proxy-ovn-metrics |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-4cthp | Created | Created container: nbdb |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-4cthp | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b05c14f2032f7ba3017e9bcb6b3be4e7eaed8223e30a721b46b24f9cdcbd6a95" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-4cthp | Created | Created container: northd |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-4cthp | Started | Started container ovn-controller |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-4cthp | Started | Started container sbdb |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-4cthp | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b05c14f2032f7ba3017e9bcb6b3be4e7eaed8223e30a721b46b24f9cdcbd6a95" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-qvfnh | Started | Started container sbdb |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-qvfnh |
Created |
Created container: sbdb | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-4cthp |
Created |
Created container: sbdb | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-qvfnh |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b05c14f2032f7ba3017e9bcb6b3be4e7eaed8223e30a721b46b24f9cdcbd6a95" already present on machine | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-4cthp |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b05c14f2032f7ba3017e9bcb6b3be4e7eaed8223e30a721b46b24f9cdcbd6a95" already present on machine | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-qvfnh |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b05c14f2032f7ba3017e9bcb6b3be4e7eaed8223e30a721b46b24f9cdcbd6a95" already present on machine | |
(x7) | openshift-network-diagnostics | kubelet | network-check-target-cb5bh | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-spkpp" : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] |
(x18) | openshift-network-diagnostics | kubelet | network-check-target-sndvg | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
(x7) | openshift-network-diagnostics | kubelet | network-check-target-sndvg | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-mbd6g" : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] |
(x18) | openshift-network-diagnostics | kubelet | network-check-target-cb5bh | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| default | ovnkube-csr-approver-controller | csr-4llmx | CSRApproved | CSR "csr-4llmx" has been approved |
| default | ovnkube-csr-approver-controller | csr-jnwsd | CSRApproved | CSR "csr-jnwsd" has been approved |
| openshift-multus | default-scheduler | multus-admission-controller-77b66fddc8-9npgz | Scheduled | Successfully assigned openshift-multus/multus-admission-controller-77b66fddc8-9npgz to master-1 |
| openshift-network-operator | daemonset-controller | iptables-alerter | SuccessfulCreate | Created pod: iptables-alerter-m6qfh |
| assisted-installer | default-scheduler | assisted-installer-controller-mzrkb | Scheduled | Successfully assigned assisted-installer/assisted-installer-controller-mzrkb to master-1 |
| openshift-service-ca-operator | default-scheduler | service-ca-operator-568c655666-t6c8q | Scheduled | Successfully assigned openshift-service-ca-operator/service-ca-operator-568c655666-t6c8q to master-1 |
| openshift-etcd-operator | default-scheduler | etcd-operator-6bddf7d79-dtp9l | Scheduled | Successfully assigned openshift-etcd-operator/etcd-operator-6bddf7d79-dtp9l to master-1 |
| openshift-controller-manager-operator | default-scheduler | openshift-controller-manager-operator-5745565d84-5l45t | Scheduled | Successfully assigned openshift-controller-manager-operator/openshift-controller-manager-operator-5745565d84-5l45t to master-1 |
| openshift-apiserver-operator | default-scheduler | openshift-apiserver-operator-7d88655794-dbtvc | Scheduled | Successfully assigned openshift-apiserver-operator/openshift-apiserver-operator-7d88655794-dbtvc to master-1 |
| openshift-machine-api | default-scheduler | cluster-autoscaler-operator-7ff449c7c5-nmpfk | Scheduled | Successfully assigned openshift-machine-api/cluster-autoscaler-operator-7ff449c7c5-nmpfk to master-1 |
| openshift-ingress-operator | default-scheduler | ingress-operator-766ddf4575-xhdjt | Scheduled | Successfully assigned openshift-ingress-operator/ingress-operator-766ddf4575-xhdjt to master-1 |
| openshift-machine-api | default-scheduler | control-plane-machine-set-operator-84f9cbd5d9-n87md | Scheduled | Successfully assigned openshift-machine-api/control-plane-machine-set-operator-84f9cbd5d9-n87md to master-1 |
| openshift-machine-config-operator | default-scheduler | machine-config-operator-7b75469658-j2dbc | Scheduled | Successfully assigned openshift-machine-config-operator/machine-config-operator-7b75469658-j2dbc to master-1 |
| openshift-network-operator | default-scheduler | iptables-alerter-m6qfh | Scheduled | Successfully assigned openshift-network-operator/iptables-alerter-m6qfh to master-1 |
| openshift-kube-controller-manager-operator | default-scheduler | kube-controller-manager-operator-5d85974df9-ppzvt | Scheduled | Successfully assigned openshift-kube-controller-manager-operator/kube-controller-manager-operator-5d85974df9-ppzvt to master-1 |
| openshift-operator-lifecycle-manager | default-scheduler | package-server-manager-798cc87f55-j2bjv | Scheduled | Successfully assigned openshift-operator-lifecycle-manager/package-server-manager-798cc87f55-j2bjv to master-1 |
| openshift-operator-lifecycle-manager | default-scheduler | olm-operator-867f8475d9-fl56c | Scheduled | Successfully assigned openshift-operator-lifecycle-manager/olm-operator-867f8475d9-fl56c to master-1 |
| openshift-monitoring | default-scheduler | cluster-monitoring-operator-5b5dd85dcc-cxtgh | Scheduled | Successfully assigned openshift-monitoring/cluster-monitoring-operator-5b5dd85dcc-cxtgh to master-1 |
| openshift-cluster-storage-operator | default-scheduler | cluster-storage-operator-56d4b95494-7ff2l | Scheduled | Successfully assigned openshift-cluster-storage-operator/cluster-storage-operator-56d4b95494-7ff2l to master-1 |
| openshift-operator-lifecycle-manager | default-scheduler | catalog-operator-f966fb6f8-dwwm2 | Scheduled | Successfully assigned openshift-operator-lifecycle-manager/catalog-operator-f966fb6f8-dwwm2 to master-1 |
| openshift-cluster-storage-operator | default-scheduler | csi-snapshot-controller-operator-7ff96dd767-9htmf | Scheduled | Successfully assigned openshift-cluster-storage-operator/csi-snapshot-controller-operator-7ff96dd767-9htmf to master-1 |
| openshift-dns-operator | default-scheduler | dns-operator-7769d9677-nh2qc | Scheduled | Successfully assigned openshift-dns-operator/dns-operator-7769d9677-nh2qc to master-1 |
| openshift-cluster-olm-operator | default-scheduler | cluster-olm-operator-77b56b6f4f-prtfl | Scheduled | Successfully assigned openshift-cluster-olm-operator/cluster-olm-operator-77b56b6f4f-prtfl to master-1 |
| openshift-config-operator | default-scheduler | openshift-config-operator-55957b47d5-vtkr6 | Scheduled | Successfully assigned openshift-config-operator/openshift-config-operator-55957b47d5-vtkr6 to master-1 |
| openshift-marketplace | default-scheduler | marketplace-operator-c4f798dd4-djh96 | Scheduled | Successfully assigned openshift-marketplace/marketplace-operator-c4f798dd4-djh96 to master-1 |
| openshift-insights | default-scheduler | insights-operator-7dcf5bd85b-chrmm | Scheduled | Successfully assigned openshift-insights/insights-operator-7dcf5bd85b-chrmm to master-1 |
| openshift-kube-apiserver-operator | default-scheduler | kube-apiserver-operator-68f5d95b74-bqdtw | Scheduled | Successfully assigned openshift-kube-apiserver-operator/kube-apiserver-operator-68f5d95b74-bqdtw to master-1 |
| openshift-kube-storage-version-migrator-operator | default-scheduler | kube-storage-version-migrator-operator-dcfdffd74-ckmcc | Scheduled | Successfully assigned openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-dcfdffd74-ckmcc to master-1 |
| openshift-cloud-credential-operator | default-scheduler | cloud-credential-operator-5cf49b6487-4cf2d | Scheduled | Successfully assigned openshift-cloud-credential-operator/cloud-credential-operator-5cf49b6487-4cf2d to master-1 |
| openshift-cluster-node-tuning-operator | default-scheduler | cluster-node-tuning-operator-7866c9bdf4-d4dlj | Scheduled | Successfully assigned openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-7866c9bdf4-d4dlj to master-1 |
| openshift-machine-api | default-scheduler | machine-api-operator-9dbb96f7-s66vj | Scheduled | Successfully assigned openshift-machine-api/machine-api-operator-9dbb96f7-s66vj to master-1 |
| openshift-authentication-operator | default-scheduler | authentication-operator-66df44bc95-gldlr | Scheduled | Successfully assigned openshift-authentication-operator/authentication-operator-66df44bc95-gldlr to master-1 |
| openshift-image-registry | default-scheduler | cluster-image-registry-operator-6b8674d7ff-gspqw | Scheduled | Successfully assigned openshift-image-registry/cluster-image-registry-operator-6b8674d7ff-gspqw to master-1 |
| openshift-multus | default-scheduler | multus-admission-controller-77b66fddc8-mgc7h | Scheduled | Successfully assigned openshift-multus/multus-admission-controller-77b66fddc8-mgc7h to master-1 |
| openshift-machine-api | default-scheduler | cluster-baremetal-operator-6c8fbf4498-kcckh | Scheduled | Successfully assigned openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-kcckh to master-1 |
| openshift-kube-scheduler-operator | default-scheduler | openshift-kube-scheduler-operator-766d6b44f6-gtvcp | Scheduled | Successfully assigned openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-766d6b44f6-gtvcp to master-1 |
| openshift-cluster-machine-approver | default-scheduler | machine-approver-7876f99457-kpq7g | Scheduled | Successfully assigned openshift-cluster-machine-approver/machine-approver-7876f99457-kpq7g to master-1 |
| openshift-cluster-storage-operator | multus | csi-snapshot-controller-operator-7ff96dd767-9htmf | AddedInterface | Add eth0 [10.128.0.9/23] from ovn-kubernetes |
| openshift-kube-controller-manager-operator | multus | kube-controller-manager-operator-5d85974df9-ppzvt | AddedInterface | Add eth0 [10.128.0.21/23] from ovn-kubernetes |
| openshift-authentication-operator | multus | authentication-operator-66df44bc95-gldlr | AddedInterface | Add eth0 [10.128.0.32/23] from ovn-kubernetes |
| openshift-etcd-operator | multus | etcd-operator-6bddf7d79-dtp9l | AddedInterface | Add eth0 [10.128.0.29/23] from ovn-kubernetes |
| openshift-controller-manager-operator | multus | openshift-controller-manager-operator-5745565d84-5l45t | AddedInterface | Add eth0 [10.128.0.34/23] from ovn-kubernetes |
| openshift-controller-manager-operator | kubelet | openshift-controller-manager-operator-5745565d84-5l45t | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f425875bda87dc167d613efc88c56256e48364b73174d1392f7d23301baec0b" |
| openshift-kube-scheduler-operator | kubelet | openshift-kube-scheduler-operator-766d6b44f6-gtvcp | Failed | Error: ErrImagePull |
| openshift-kube-scheduler-operator | kubelet | openshift-kube-scheduler-operator-766d6b44f6-gtvcp | Failed | Failed to pull image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f81582ec6e6cc159d578a2d70ce7c8a4db8eb0172334226c9123770d7d2a1642": pull QPS exceeded |
| openshift-kube-scheduler-operator | multus | openshift-kube-scheduler-operator-766d6b44f6-gtvcp | AddedInterface | Add eth0 [10.128.0.23/23] from ovn-kubernetes |
| openshift-apiserver-operator | kubelet | openshift-apiserver-operator-7d88655794-dbtvc | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9ef76839c19a20a0e01cdd2b9fd53ae31937d6f478b2c2343679099985fe9e47" |
| openshift-apiserver-operator | multus | openshift-apiserver-operator-7d88655794-dbtvc | AddedInterface | Add eth0 [10.128.0.24/23] from ovn-kubernetes |
| openshift-service-ca-operator | multus | service-ca-operator-568c655666-t6c8q | AddedInterface | Add eth0 [10.128.0.22/23] from ovn-kubernetes |
| openshift-service-ca-operator | kubelet | service-ca-operator-568c655666-t6c8q | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:97de153ac76971fa69d4af7166c63416fbe37d759deb7833340c1c39d418b745" |
| openshift-etcd-operator | kubelet | etcd-operator-6bddf7d79-dtp9l | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0117f94d9f2894980a318780f3c0ab2efba02e72bc7ccb267bd44c4900eb0174" |
| openshift-cluster-olm-operator | kubelet | cluster-olm-operator-77b56b6f4f-prtfl | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76a6a279901a441ec7d5e67c384c86cd72feaa38e08365ec1eed45fb11b5099f" |
| openshift-network-operator | kubelet | iptables-alerter-m7hdw | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c1bf279b80440264700aa5e7b186b74a9ca45bd6a14638beb3ee5df0e610086a" |
| openshift-network-operator | daemonset-controller | iptables-alerter | SuccessfulCreate | Created pod: iptables-alerter-m7hdw |
| openshift-network-operator | default-scheduler | iptables-alerter-m7hdw | Scheduled | Successfully assigned openshift-network-operator/iptables-alerter-m7hdw to master-2 |
| openshift-insights | multus | insights-operator-7dcf5bd85b-chrmm | AddedInterface | Add eth0 [10.128.0.28/23] from ovn-kubernetes |
| openshift-kube-controller-manager-operator | kubelet | kube-controller-manager-operator-5d85974df9-ppzvt | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:67a578604f1437ddb47d87e748b6772d86dd3856048cc355226789db22724b55" |
| openshift-insights | kubelet | insights-operator-7dcf5bd85b-chrmm | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1c3058c461907ec5ff06a628e935722d7ec8bf86fa90b95269372a6dc41444ce" |
| openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-operator-7ff96dd767-9htmf | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b05bf4bdb9af40d949fa343ad1fd1d79d032d0bd0eb188ed33fbdceeb5056ce0" |
| openshift-network-operator | kubelet | iptables-alerter-m6qfh | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c1bf279b80440264700aa5e7b186b74a9ca45bd6a14638beb3ee5df0e610086a" |
| openshift-cluster-olm-operator | multus | cluster-olm-operator-77b56b6f4f-prtfl | AddedInterface | Add eth0 [10.128.0.31/23] from ovn-kubernetes |
| openshift-config-operator | kubelet | openshift-config-operator-55957b47d5-vtkr6 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf52105972e412c56b2dda0ad04d6277741e50a95e9aad0510f790d075d5148a" |
| openshift-config-operator | multus | openshift-config-operator-55957b47d5-vtkr6 | AddedInterface | Add eth0 [10.128.0.20/23] from ovn-kubernetes |
| openshift-kube-apiserver-operator | kubelet | kube-apiserver-operator-68f5d95b74-bqdtw | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" |
| openshift-kube-apiserver-operator | multus | kube-apiserver-operator-68f5d95b74-bqdtw | AddedInterface | Add eth0 [10.128.0.10/23] from ovn-kubernetes |
| assisted-installer | kubelet | assisted-installer-controller-mzrkb | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2fe368c29648f07f2b0f3849feef0eda2000555e91d268e2b5a19526179619c" |
| openshift-cluster-storage-operator | kubelet | cluster-storage-operator-56d4b95494-7ff2l | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d8df789ec16971dc14423860f7b20b9ee27d926e4e5be632714cadc15e7f9b32" |
| openshift-kube-storage-version-migrator-operator | kubelet | kube-storage-version-migrator-operator-dcfdffd74-ckmcc | Failed | Error: ErrImagePull |
| openshift-cluster-storage-operator | multus | cluster-storage-operator-56d4b95494-7ff2l | AddedInterface | Add eth0 [10.128.0.30/23] from ovn-kubernetes |
| openshift-authentication-operator | kubelet | authentication-operator-66df44bc95-gldlr | Failed | Failed to pull image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b5f27555b2adaa9cd82922dde7517c78eac05afdd090d572e62a9a425b42a7d": pull QPS exceeded |
| openshift-authentication-operator | kubelet | authentication-operator-66df44bc95-gldlr | Failed | Error: ErrImagePull |
| openshift-kube-storage-version-migrator-operator | multus | kube-storage-version-migrator-operator-dcfdffd74-ckmcc | AddedInterface | Add eth0 [10.128.0.33/23] from ovn-kubernetes |
| openshift-kube-storage-version-migrator-operator | kubelet | kube-storage-version-migrator-operator-dcfdffd74-ckmcc | Failed | Failed to pull image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b9e086347802546d8040d17296f434edf088305103b874c900beee3a3575c34": pull QPS exceeded |
(x2) | openshift-authentication-operator | kubelet | authentication-operator-66df44bc95-gldlr | BackOff | Back-off pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b5f27555b2adaa9cd82922dde7517c78eac05afdd090d572e62a9a425b42a7d" |
(x2) | openshift-kube-scheduler-operator | kubelet | openshift-kube-scheduler-operator-766d6b44f6-gtvcp | BackOff | Back-off pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f81582ec6e6cc159d578a2d70ce7c8a4db8eb0172334226c9123770d7d2a1642" |
(x2) | openshift-kube-scheduler-operator | kubelet | openshift-kube-scheduler-operator-766d6b44f6-gtvcp | Failed | Error: ImagePullBackOff |
(x2) | openshift-kube-storage-version-migrator-operator | kubelet | kube-storage-version-migrator-operator-dcfdffd74-ckmcc | Failed | Error: ImagePullBackOff |
(x2) | openshift-authentication-operator | kubelet | authentication-operator-66df44bc95-gldlr | Failed | Error: ImagePullBackOff |
(x2) | openshift-kube-storage-version-migrator-operator | kubelet | kube-storage-version-migrator-operator-dcfdffd74-ckmcc | BackOff | Back-off pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b9e086347802546d8040d17296f434edf088305103b874c900beee3a3575c34" |
| openshift-network-operator | kubelet | iptables-alerter-m7hdw | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c1bf279b80440264700aa5e7b186b74a9ca45bd6a14638beb3ee5df0e610086a" in 3.191s (3.191s including waiting). Image size: 575181628 bytes. |
(x7) | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-779749f859-bscv5 | BackOff | Back-off restarting failed container kube-rbac-proxy in pod cluster-cloud-controller-manager-operator-779749f859-bscv5_openshift-cloud-controller-manager-operator(18346e46-a062-4e0d-b90a-c05646a46c7e) |
| openshift-config-operator | kubelet | openshift-config-operator-55957b47d5-vtkr6 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf52105972e412c56b2dda0ad04d6277741e50a95e9aad0510f790d075d5148a" in 4.839s (4.839s including waiting). Image size: 431673420 bytes. |
| openshift-cluster-olm-operator | kubelet | cluster-olm-operator-77b56b6f4f-prtfl | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76a6a279901a441ec7d5e67c384c86cd72feaa38e08365ec1eed45fb11b5099f" in 6.24s (6.24s including waiting). Image size: 441083195 bytes. |
| openshift-network-operator | kubelet | iptables-alerter-m7hdw | Started | Started container iptables-alerter |
| openshift-network-operator | kubelet | iptables-alerter-m7hdw | Created | Created container: iptables-alerter |
| openshift-apiserver-operator | kubelet | openshift-apiserver-operator-7d88655794-dbtvc | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9ef76839c19a20a0e01cdd2b9fd53ae31937d6f478b2c2343679099985fe9e47" in 8.887s (8.887s including waiting). Image size: 505315113 bytes. |
| openshift-config-operator | kubelet | openshift-config-operator-55957b47d5-vtkr6 | Started | Started container openshift-api |
| assisted-installer | kubelet | assisted-installer-controller-mzrkb | Started | Started container assisted-installer-controller |
| openshift-etcd-operator | kubelet | etcd-operator-6bddf7d79-dtp9l | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0117f94d9f2894980a318780f3c0ab2efba02e72bc7ccb267bd44c4900eb0174" in 8.636s (8.637s including waiting). Image size: 511412209 bytes. |
| assisted-installer | kubelet | assisted-installer-controller-mzrkb | Created | Created container: assisted-installer-controller |
| openshift-insights | kubelet | insights-operator-7dcf5bd85b-chrmm | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1c3058c461907ec5ff06a628e935722d7ec8bf86fa90b95269372a6dc41444ce" in 8.943s (8.943s including waiting). Image size: 497698695 bytes. |
| openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-operator-7ff96dd767-9htmf | Started | Started container csi-snapshot-controller-operator |
| openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-operator-7ff96dd767-9htmf | Created | Created container: csi-snapshot-controller-operator |
| openshift-controller-manager-operator | kubelet | openshift-controller-manager-operator-5745565d84-5l45t | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f425875bda87dc167d613efc88c56256e48364b73174d1392f7d23301baec0b" in 8.813s (8.814s including waiting). Image size: 501010081 bytes. |
| openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-operator-7ff96dd767-9htmf | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b05bf4bdb9af40d949fa343ad1fd1d79d032d0bd0eb188ed33fbdceeb5056ce0" in 8.847s (8.847s including waiting). Image size: 499517132 bytes. |
| openshift-config-operator | kubelet | openshift-config-operator-55957b47d5-vtkr6 | Created | Created container: openshift-api |
| openshift-kube-controller-manager-operator | kubelet | kube-controller-manager-operator-5d85974df9-ppzvt | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:67a578604f1437ddb47d87e748b6772d86dd3856048cc355226789db22724b55" in 8.836s (8.836s including waiting). Image size: 501914388 bytes. |
| openshift-network-operator | kubelet | iptables-alerter-m6qfh | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c1bf279b80440264700aa5e7b186b74a9ca45bd6a14638beb3ee5df0e610086a" in 8.779s (8.779s including waiting). Image size: 575181628 bytes. |
| assisted-installer | kubelet | assisted-installer-controller-mzrkb | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2fe368c29648f07f2b0f3849feef0eda2000555e91d268e2b5a19526179619c" in 8.71s (8.71s including waiting). Image size: 680965375 bytes. |
| openshift-cluster-storage-operator | kubelet | cluster-storage-operator-56d4b95494-7ff2l | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d8df789ec16971dc14423860f7b20b9ee27d926e4e5be632714cadc15e7f9b32" in 8.455s (8.455s including waiting). Image size: 506615759 bytes. |
| openshift-kube-apiserver-operator | kubelet | kube-apiserver-operator-68f5d95b74-bqdtw | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" in 9.045s (9.045s including waiting). Image size: 508004341 bytes. |
| openshift-cluster-olm-operator | kubelet | cluster-olm-operator-77b56b6f4f-prtfl | Created | Created container: copy-catalogd-manifests |
| openshift-cluster-olm-operator | kubelet | cluster-olm-operator-77b56b6f4f-prtfl | Started | Started container copy-catalogd-manifests |
| openshift-service-ca-operator | kubelet | service-ca-operator-568c655666-t6c8q | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:97de153ac76971fa69d4af7166c63416fbe37d759deb7833340c1c39d418b745" in 8.72s (8.72s including waiting). Image size: 501585296 bytes. |
| openshift-cluster-storage-operator | cluster-storage-operator-status-controller-statussyncer_storage | cluster-storage-operator | OperatorStatusChanged | Status for clusteroperator/storage changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"" "namespaces" "" "openshift-cluster-storage-operator"} {"" "namespaces" "" "openshift-cluster-csi-drivers"} {"operator.openshift.io" "storages" "" "cluster"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "cluster-storage-operator-role"}],status.versions changed from [] to [{"operator" "4.18.25"}] |
| openshift-cluster-storage-operator | csi-snapshot-controller-operator | csi-snapshot-controller-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| openshift-service-ca-operator | service-ca-operator | service-ca-operator-lock | LeaderElection | service-ca-operator-568c655666-t6c8q_8fd80cdb-77ba-4cdb-9c1f-bfd4d8af0ec2 became leader |
| openshift-service-ca-operator | service-ca-operator-status-controller-statussyncer_service-ca | service-ca-operator | OperatorStatusChanged | Status for clusteroperator/service-ca changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"operator.openshift.io" "servicecas" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-service-ca-operator"} {"" "namespaces" "" "openshift-service-ca"}] |
| openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Degraded set to False ("APIServerDeploymentDegraded: waiting for observed configuration to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for observed configuration to be available\nAPIServerWorkloadDegraded: \nRevisionControllerDegraded: configmap \"audit\" not found"),Progressing set to False ("All is well"),Available set to False ("APIServicesAvailable: endpoints \"api\" not found"),Upgradeable set to True ("All is well"),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"operator.openshift.io" "openshiftapiservers" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-apiserver-operator"} {"" "namespaces" "" "openshift-apiserver"} {"" "namespaces" "" "openshift-etcd-operator"} {"" "endpoints" "openshift-etcd" "host-etcd-2"} {"controlplane.operator.openshift.io" "podnetworkconnectivitychecks" "openshift-apiserver" ""} {"apiregistration.k8s.io" "apiservices" "" "v1.apps.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.authorization.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.build.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.image.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.project.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.quota.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.route.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.security.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.template.openshift.io"}],status.versions changed from [] to [{"operator" "4.18.25"}] |
(x2) | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorVersionChanged | clusteroperator/openshift-apiserver version "operator" changed from "" to "4.18.25" |
| openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | FastControllerResync | Controller "openshift-apiserver-APIService" resync interval is set to 10s which might lead to client request throttling |
| openshift-apiserver-operator | openshift-apiserver-operator-audit-policy-controller-auditpolicycontroller | openshift-apiserver-operator | FastControllerResync | Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling |
| openshift-apiserver-operator | openshift-apiserver-operator | openshift-apiserver-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| openshift-cluster-storage-operator | cluster-storage-operator | cluster-storage-operator-lock | LeaderElection | cluster-storage-operator-56d4b95494-7ff2l_74691020-956f-410a-a50c-69fc590c0a0f became leader |
| openshift-cluster-storage-operator | cluster-storage-operator | cluster-storage-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[...], Disabled:[...]} (Enabled/Disabled feature-gate lists identical to the FeatureGatesInitialized event for openshift-apiserver-operator above) |
| openshift-apiserver-operator | openshift-apiserver-operator | openshift-apiserver-operator-lock | LeaderElection | openshift-apiserver-operator-7d88655794-dbtvc_7605e3d7-1db9-4758-bf78-da5cda6bd830 became leader |
| openshift-cluster-storage-operator | cluster-storage-operator-status-controller-statussyncer_storage | cluster-storage-operator | OperatorVersionChanged | clusteroperator/storage version "operator" changed from "" to "4.18.25" |
| openshift-cluster-storage-operator | cluster-storage-operator-status-controller-statussyncer_storage | cluster-storage-operator | OperatorStatusChanged | Status for clusteroperator/storage changed: Progressing changed from Unknown to False ("All is well"),Available changed from Unknown to True ("DefaultStorageClassControllerAvailable: No default StorageClass for this platform"),Upgradeable changed from Unknown to True ("All is well") |
| openshift-cluster-storage-operator | cluster-storage-operator-status-controller-statussyncer_storage | cluster-storage-operator | OperatorStatusChanged | Status for clusteroperator/storage changed: Degraded changed from Unknown to False ("All is well") |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator | kube-controller-manager-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[...], Disabled:[...]} (Enabled/Disabled feature-gate lists identical to the FeatureGatesInitialized event for openshift-apiserver-operator above) |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator | kube-controller-manager-operator-lock | LeaderElection | kube-controller-manager-operator-5d85974df9-ppzvt_35d8db69-7810-4afb-bc1a-3ad82fcb057a became leader |
| openshift-config-operator | kubelet | openshift-config-operator-55957b47d5-vtkr6 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aa10afc83b17b0d76fcff8963f51e62ae851f145cd6c27f61a0604e0c713fe3a" |
| openshift-cluster-olm-operator | kubelet | cluster-olm-operator-77b56b6f4f-prtfl | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94bcc0ff0f9ec7df4aeb53fe4bf0310e26cb7b40bdf772efc95a7ccfcfe69721" |
| openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[...], Disabled:[...]} (Enabled/Disabled feature-gate lists identical to the FeatureGatesInitialized event for openshift-apiserver-operator above) |
| openshift-etcd-operator | openshift-cluster-etcd-operator | openshift-cluster-etcd-operator-lock | LeaderElection | etcd-operator-6bddf7d79-dtp9l_afbef432-874a-483d-8ea2-1b411d86cf36 became leader |
| assisted-installer | | assisted-installer-controller | AssistedControllerIsReady | Assisted controller managed to connect to assisted service and kube-apiserver and is ready to start |
| openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator-lock | LeaderElection | openshift-controller-manager-operator-5745565d84-5l45t_a174722d-0817-4510-8dd8-3c835843f010 became leader |
| openshift-kube-apiserver-operator | kube-apiserver-operator | kube-apiserver-operator-lock | LeaderElection | kube-apiserver-operator-68f5d95b74-bqdtw_d90dbb1e-b1da-4647-990b-39fd841b4c9a became leader |
| openshift-kube-apiserver-operator | kube-apiserver-operator | kube-apiserver-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[...], Disabled:[...]} (Enabled/Disabled feature-gate lists identical to the FeatureGatesInitialized event for openshift-apiserver-operator above) |
| openshift-etcd-operator | openshift-cluster-etcd-operator-member-observer-controller-etcdmemberscontroller | etcd-operator | ReportEtcdMembersErrorUpdatingStatus | etcds.operator.openshift.io "cluster" not found |
| openshift-etcd-operator | openshift-cluster-etcd-operator | etcd-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[...], Disabled:[...]} (Enabled/Disabled feature-gate lists identical to the FeatureGatesInitialized event for openshift-apiserver-operator above) |
| openshift-cluster-storage-operator | csi-snapshot-controller-operator | csi-snapshot-controller-operator-lock | LeaderElection | csi-snapshot-controller-operator-7ff96dd767-9htmf_b684dcd2-0d50-47da-b0ed-f9f2bb6a9dd2 became leader |
| assisted-installer | kubelet | master-2-debug-xl9pk | Pulling | Pulling image "registry.redhat.io/rhel9/support-tools" |
| assisted-installer | kubelet | master-1-debug-qq2pg | Pulling | Pulling image "registry.redhat.io/rhel9/support-tools" |
| openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | NamespaceCreated | Created Namespace/openshift-route-controller-manager because it was missing |
(x2) | openshift-cluster-storage-operator | controllermanager | csi-snapshot-controller-pdb | NoPods | No matching pods found |
| openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Degraded set to False ("All is well"),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"" "namespaces" "" "openshift-cluster-storage-operator"} {"operator.openshift.io" "csisnapshotcontrollers" "" "cluster"}] |
| openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Upgradeable changed from Unknown to True ("All is well") |
| openshift-cluster-storage-operator | csi-snapshot-controller-operator-csisnapshotstaticresourcecontroller-csisnapshotstaticresourcecontroller-staticresources | csi-snapshot-controller-operator | PodDisruptionBudgetCreated | Created PodDisruptionBudget.policy/csi-snapshot-controller-pdb -n openshift-cluster-storage-operator because it was missing |
| openshift-cluster-storage-operator | csi-snapshot-controller-operator-csisnapshotcontroller-deployment-controller--csisnapshotcontroller | csi-snapshot-controller-operator | DeploymentCreated | Created Deployment.apps/csi-snapshot-controller -n openshift-cluster-storage-operator because it was missing |
| kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-service-ca namespace |
| kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-route-controller-manager namespace |
| kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-controller-manager namespace |
| openshift-service-ca-operator | service-ca-operator | service-ca-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/system:openshift:controller:service-ca -n openshift-service-ca because it was missing |
| openshift-service-ca-operator | service-ca-operator | service-ca-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/system:openshift:controller:service-ca -n openshift-service-ca because it was missing |
| openshift-service-ca-operator | service-ca-operator | service-ca-operator | ServiceAccountCreated | Created ServiceAccount/service-ca -n openshift-service-ca because it was missing |
| openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:ingress-to-route-controller because it was missing |
| openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:tokenreview-openshift-route-controller-manager because it was missing |
| openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/prometheus-k8s -n openshift-controller-manager because it was missing |
| openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:openshift:tokenreview-openshift-route-controller-manager because it was missing |
| openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n openshift-controller-manager because it was missing |
| openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:update-buildconfig-status because it was missing |
| openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:update-buildconfig-status because it was missing |
| openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:deployer because it was missing |
| openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ServiceCreated | Created Service/route-controller-manager -n openshift-route-controller-manager because it was missing |
| openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n openshift-route-controller-manager because it was missing |
| openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/prometheus-k8s -n openshift-route-controller-manager because it was missing |
| openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:deployer because it was missing |
| openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:image-trigger-controller because it was missing |
| openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:image-trigger-controller because it was missing |
| openshift-cluster-storage-operator | csi-snapshot-controller-operator-csisnapshotgueststaticresourcecontroller-csisnapshotgueststaticresourcecontroller-staticresources | csi-snapshot-controller-operator | CustomResourceDefinitionCreated | Created CustomResourceDefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io because it was missing |
| openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/system:openshift:sa-creating-route-controller-manager -n openshift-infra because it was missing |
| openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/system:openshift:sa-creating-route-controller-manager -n openshift-infra because it was missing |
| openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ServiceAccountCreated | Created ServiceAccount/route-controller-manager-sa -n openshift-route-controller-manager because it was missing |
| openshift-service-ca-operator | service-ca-operator | service-ca-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:service-ca because it was missing |
| openshift-etcd-operator | openshift-cluster-etcd-operator-etcd-node | etcd-operator | MasterNodesReadyChanged | All master nodes are ready |
| openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled up replica set controller-manager-5d9b59775c to 2 |
(x7) | openshift-controller-manager | replicaset-controller | controller-manager-5d9b59775c | FailedCreate | Error creating: pods "controller-manager-5d9b59775c-" is forbidden: error looking up service account openshift-controller-manager/openshift-controller-manager-sa: serviceaccount "openshift-controller-manager-sa" not found |
| openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | NamespaceCreated | Created Namespace/openshift-controller-manager because it was missing |
| openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Progressing changed from Unknown to True ("CSISnapshotControllerProgressing: Waiting for Deployment to act on changes"),Available changed from Unknown to False ("CSISnapshotControllerAvailable: Waiting for Deployment") |
| openshift-kube-apiserver-operator | kube-apiserver-operator-serviceaccountissuercontroller | kube-apiserver-operator | ServiceAccountIssuer | Issuer set to default value "https://kubernetes.default.svc" |
| openshift-cluster-storage-operator | csi-snapshot-controller-operator-csisnapshotgueststaticresourcecontroller-csisnapshotgueststaticresourcecontroller-staticresources | csi-snapshot-controller-operator | CustomResourceDefinitionCreated | Created CustomResourceDefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io because it was missing |
| openshift-kube-apiserver-operator | kube-apiserver-operator-high-cpu-usage-alert-controller-highcpuusagealertcontroller | kube-apiserver-operator | PrometheusRuleCreated | Created PrometheusRule.monitoring.coreos.com/cpu-utilization -n openshift-kube-apiserver because it was missing |
(x2) | openshift-kube-apiserver-operator | kube-apiserver-operator-kube-apiserver-node | kube-apiserver-operator | MasterNodeObserved | Observed new master node master-2 |
| openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleBindingCreateFailed | Failed to create RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-route-controller-manager -n openshift-route-controller-manager: namespaces "openshift-route-controller-manager" not found |
(x2) | openshift-kube-apiserver-operator | kube-apiserver-operator-kube-apiserver-node | kube-apiserver-operator | MasterNodeObserved | Observed new master node master-1 |
| openshift-service-ca-operator | service-ca-operator | service-ca-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:service-ca because it was missing |
| openshift-cluster-storage-operator | csi-snapshot-controller-operator-csisnapshotgueststaticresourcecontroller-csisnapshotgueststaticresourcecontroller-staticresources | csi-snapshot-controller-operator | CustomResourceDefinitionCreated | Created CustomResourceDefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io because it was missing |
| openshift-cluster-storage-operator | replicaset-controller | csi-snapshot-controller-ddd7d64cd | SuccessfulCreate | Created pod: csi-snapshot-controller-ddd7d64cd-5s4kt |
| openshift-cluster-storage-operator | replicaset-controller | csi-snapshot-controller-ddd7d64cd | SuccessfulCreate | Created pod: csi-snapshot-controller-ddd7d64cd-hph6v |
| openshift-cluster-storage-operator | deployment-controller | csi-snapshot-controller | ScalingReplicaSet | Scaled up replica set csi-snapshot-controller-ddd7d64cd to 2 |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
DeploymentCreateFailed |
Failed to create Deployment.apps/route-controller-manager -n openshift-route-controller-manager: namespaces "openshift-route-controller-manager" not found | |
openshift-service-ca-operator |
service-ca-operator |
service-ca-operator |
NamespaceCreated |
Created Namespace/openshift-service-ca because it was missing | |
openshift-cluster-storage-operator |
csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller |
csi-snapshot-controller-operator |
OperatorStatusChanged |
Status for clusteroperator/csi-snapshot-controller changed: Progressing message changed from "CSISnapshotControllerProgressing: Waiting for Deployment to act on changes" to "CSISnapshotControllerProgressing: Waiting for Deployment to deploy pods" | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-status-controller-statussyncer_etcd |
etcd-operator |
OperatorStatusChanged |
Status for clusteroperator/etcd changed: Upgradeable changed from Unknown to True ("All is well") | |
openshift-cluster-storage-operator |
csi-snapshot-controller-operator-csisnapshotstaticresourcecontroller-csisnapshotstaticresourcecontroller-staticresources |
csi-snapshot-controller-operator |
ServiceAccountCreated |
Created ServiceAccount/csi-snapshot-controller -n openshift-cluster-storage-operator because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-config-observer-configobserver |
openshift-controller-manager-operator |
ObserveFeatureFlagsUpdated |
Updated featureGates to BuildCSIVolumes=true | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-status-controller-statussyncer_etcd |
etcd-operator |
OperatorStatusChanged |
Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced" to "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values" | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-config-observer-configobserver |
openshift-controller-manager-operator |
ObservedConfigChanged |
Writing updated observed config: map[string]any{ + "build": map[string]any{ + "buildDefaults": map[string]any{"resources": map[string]any{}}, + "imageTemplateFormat": map[string]any{ + "format": string("quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a435ee2ec"...), + }, + }, + "controllers": []any{ + string("openshift.io/build"), string("openshift.io/build-config-change"), + string("openshift.io/builder-rolebindings"), + string("openshift.io/builder-serviceaccount"), + string("-openshift.io/default-rolebindings"), string("openshift.io/deployer"), + string("openshift.io/deployer-rolebindings"), + string("openshift.io/deployer-serviceaccount"), + string("openshift.io/deploymentconfig"), string("openshift.io/image-import"), + string("openshift.io/image-puller-rolebindings"), + string("openshift.io/image-signature-import"), + string("openshift.io/image-trigger"), string("openshift.io/ingress-ip"), + string("openshift.io/ingress-to-route"), + string("openshift.io/origin-namespace"), ..., + }, + "deployer": map[string]any{ + "imageTemplateFormat": map[string]any{ + "format": string("quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ac368a7ef"...), + }, + }, + "featureGates": []any{string("BuildCSIVolumes=true")}, + "ingress": map[string]any{"ingressIPNetworkCIDR": string("")}, } | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
RoleCreateFailed |
Failed to create Role.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-route-controller-manager -n openshift-route-controller-manager: namespaces "openshift-route-controller-manager" not found | |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
DeploymentCreated |
Created Deployment.apps/controller-manager -n openshift-controller-manager because it was missing | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-status-controller-statussyncer_etcd |
etcd-operator |
OperatorStatusChanged |
Status for clusteroperator/etcd changed: Degraded set to False ("EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced"),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.versions changed from [] to [{"raw-internal" "4.18.25"}] | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:ingress-to-route-controller because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
ServiceCreated |
Created Service/controller-manager -n openshift-controller-manager because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
ServiceAccountCreated |
Created ServiceAccount/openshift-controller-manager-sa -n openshift-controller-manager because it was missing | |
openshift-cluster-storage-operator |
kubelet |
csi-snapshot-controller-ddd7d64cd-hph6v |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eeb8312c455dd728870a6332c7e36e9068f6031127ce3e481a9a1131da527265" | |
(x2) | openshift-etcd-operator |
openshift-cluster-etcd-operator-status-controller-statussyncer_etcd |
etcd-operator |
OperatorVersionChanged |
clusteroperator/etcd version "raw-internal" changed from "" to "4.18.25" |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/system:openshift:sa-creating-openshift-controller-manager -n openshift-infra because it was missing | |
openshift-cluster-storage-operator |
multus |
csi-snapshot-controller-ddd7d64cd-hph6v |
AddedInterface |
Add eth0 [10.129.0.5/23] from ovn-kubernetes | |
openshift-cluster-storage-operator |
default-scheduler |
csi-snapshot-controller-ddd7d64cd-hph6v |
Scheduled |
Successfully assigned openshift-cluster-storage-operator/csi-snapshot-controller-ddd7d64cd-hph6v to master-2 | |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
ConfigMapCreateFailed |
Failed to create ConfigMap/config -n openshift-route-controller-manager: namespaces "openshift-route-controller-manager" not found | |
openshift-cluster-storage-operator |
kubelet |
csi-snapshot-controller-ddd7d64cd-5s4kt |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eeb8312c455dd728870a6332c7e36e9068f6031127ce3e481a9a1131da527265" | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-route-controller-manager because it was missing | |
openshift-cluster-storage-operator |
multus |
csi-snapshot-controller-ddd7d64cd-5s4kt |
AddedInterface |
Add eth0 [10.128.0.36/23] from ovn-kubernetes | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-etcd-node |
etcd-operator |
MasterNodeObserved |
Observed new master node master-2 | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator |
kube-controller-manager-operator |
CABundleUpdateRequired |
"csr-controller-signer-ca" in "openshift-kube-controller-manager-operator" requires a new cert: configmap doesn't exist | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-etcd-node |
etcd-operator |
MasterNodeObserved |
Observed new master node master-1 | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorVersionChanged |
clusteroperator/kube-controller-manager version "raw-internal" changed from "" to "4.18.25" | |
openshift-cluster-storage-operator |
default-scheduler |
csi-snapshot-controller-ddd7d64cd-5s4kt |
Scheduled |
Successfully assigned openshift-cluster-storage-operator/csi-snapshot-controller-ddd7d64cd-5s4kt to master-1 | |
(x2) | openshift-kube-controller-manager-operator |
kube-controller-manager-operator-kube-controller-manager-node |
kube-controller-manager-operator |
MasterNodeObserved |
Observed new master node master-1 |
(x2) | openshift-kube-controller-manager-operator |
kube-controller-manager-operator-kube-controller-manager-node |
kube-controller-manager-operator |
MasterNodeObserved |
Observed new master node master-2 |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n kube-system because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded set to False ("NodeControllerDegraded: All master nodes are ready"),Progressing set to False ("All is well"),Available set to Unknown (""),Upgradeable set to True ("KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are synced."),EvaluationConditionsDetected set to False ("All is well"),status.relatedObjects changed from [{"operator.openshift.io" "kubeapiservers" "" "cluster"} {"apiextensions.k8s.io" "customresourcedefinitions" "" ""} {"security.openshift.io" "securitycontextconstraints" "" ""} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-apiserver-operator"} {"" "namespaces" "" "openshift-kube-apiserver"} {"admissionregistration.k8s.io" "mutatingwebhookconfigurations" "" ""} {"admissionregistration.k8s.io" "validatingwebhookconfigurations" "" ""} {"controlplane.operator.openshift.io" "podnetworkconnectivitychecks" "openshift-kube-apiserver" ""} {"apiserver.openshift.io" "apirequestcounts" "" ""}] to [{"operator.openshift.io" "kubeapiservers" "" "cluster"} {"apiextensions.k8s.io" "customresourcedefinitions" "" ""} {"security.openshift.io" "securitycontextconstraints" "" ""} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-apiserver-operator"} {"" "namespaces" "" "openshift-kube-apiserver"} {"admissionregistration.k8s.io" "mutatingwebhookconfigurations" "" ""} {"admissionregistration.k8s.io" "validatingwebhookconfigurations" "" ""} {"controlplane.operator.openshift.io" "podnetworkconnectivitychecks" "openshift-kube-apiserver" ""} {"apiserver.openshift.io" "apirequestcounts" "" ""} {"config.openshift.io" "nodes" "" "cluster"}],status.versions changed from [] to [{"raw-internal" "4.18.25"}] | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Degraded changed from Unknown to False ("NodeControllerDegraded: All master nodes are ready") | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Upgradeable changed from Unknown to True ("All is well") | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/system:openshift:openshift-route-controller-manager because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorVersionChanged |
clusteroperator/kube-apiserver version "raw-internal" changed from "" to "4.18.25" | |
(x2) | openshift-kube-controller-manager-operator |
kube-controller-manager-operator-kube-controller-manager-node |
kube-controller-manager-operator |
MasterNodesReadyChanged |
All master nodes are ready |
openshift-kube-apiserver-operator |
kube-apiserver-operator-audit-policy-controller-auditpolicycontroller |
kube-apiserver-operator |
FastControllerResync |
Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling | |
openshift-service-ca-operator |
service-ca-operator-status-controller-statussyncer_service-ca |
service-ca-operator |
OperatorStatusChanged |
Status for clusteroperator/service-ca changed: Degraded changed from Unknown to False ("All is well") | |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
ConfigMapCreateFailed |
Failed to create ConfigMap/config -n openshift-controller-manager: namespaces "openshift-controller-manager" not found | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager |
openshift-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-controller-manager changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"operator.openshift.io" "openshiftcontrollermanagers" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-controller-manager-operator"} {"" "namespaces" "" "openshift-controller-manager"} {"" "namespaces" "" "openshift-route-controller-manager"}] | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [{"operator.openshift.io" "kubecontrollermanagers" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-controller-manager"} {"" "namespaces" "" "openshift-kube-controller-manager-operator"} {"" "namespaces" "" "kube-system"} {"" "nodes" "" ""} {"certificates.k8s.io" "certificatesigningrequests" "" ""}] to [{"operator.openshift.io" "kubecontrollermanagers" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-controller-manager"} {"" "namespaces" "" "openshift-kube-controller-manager-operator"} {"" "namespaces" "" "kube-system"} {"certificates.k8s.io" "certificatesigningrequests" "" ""} {"" "nodes" "" ""} {"config.openshift.io" "nodes" "" "cluster"}],status.versions changed from [] to [{"raw-internal" "4.18.25"}] | |
(x2) | openshift-kube-apiserver-operator |
kube-apiserver-operator-kube-apiserver-node |
kube-apiserver-operator |
MasterNodesReadyChanged |
All master nodes are ready |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n kube-system because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/system:openshift:tokenreview-openshift-controller-manager because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/system:openshift:sa-creating-openshift-controller-manager -n openshift-infra because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:tokenreview-openshift-controller-manager because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager |
openshift-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-controller-manager changed: Degraded changed from Unknown to False ("All is well") | |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
ConfigMapCreateFailed |
Failed to create ConfigMap/openshift-service-ca -n openshift-controller-manager: namespaces "openshift-controller-manager" not found | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
RoleCreateFailed |
Failed to create Role.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n openshift-controller-manager: namespaces "openshift-controller-manager" not found | |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
ConfigMapCreateFailed |
Failed to create ConfigMap/openshift-global-ca -n openshift-controller-manager: namespaces "openshift-controller-manager" not found | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
RoleBindingCreateFailed |
Failed to create RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n openshift-controller-manager: namespaces "openshift-controller-manager" not found | |
(x2) | openshift-controller-manager |
kubelet |
controller-manager-5d9b59775c-5nrm2 |
FailedMount |
MountVolume.SetUp failed for volume "proxy-ca-bundles" : configmap "openshift-global-ca" not found |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/openshift-global-ca -n openshift-controller-manager because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Progressing changed from Unknown to False ("All is well") | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found" | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-config-observer-configobserver |
kube-controller-manager-operator |
ObserveFeatureFlagsUpdated |
Updated extendedArguments.feature-gates to AWSEFSDriverVolumeMetrics=true,AdminNetworkPolicy=true,AlibabaPlatform=true,AzureWorkloadIdentity=true,BareMetalLoadBalancer=true,BuildCSIVolumes=true,ChunkSizeMiB=true,CloudDualStackNodeIPs=true,DisableKubeletCloudCredentialProviders=true,GCPLabelsTags=true,HardwareSpeed=true,IngressControllerLBSubnetsAWS=true,KMSv1=true,ManagedBootImages=true,ManagedBootImagesAWS=true,MultiArchInstallAWS=true,MultiArchInstallGCP=true,NetworkDiagnosticsConfig=true,NetworkLiveMigration=true,NetworkSegmentation=true,NewOLM=true,NodeDisruptionPolicy=true,OnClusterBuild=true,PersistentIPsForVirtualization=true,PrivateHostedZoneAWS=true,SetEIPForNLBIngressController=true,VSphereControlPlaneMachineSet=true,VSphereDriverConfiguration=true,VSphereMultiVCenters=true,VSphereStaticIPs=true,ValidatingAdmissionPolicy=true,AWSClusterHostedDNS=false,AdditionalRoutingCapabilities=false,AutomatedEtcdBackup=false,BootcNodeManagement=false,CSIDriverSharedResource=false,ClusterAPIInstall=false,ClusterAPIInstallIBMCloud=false,ClusterMonitoringConfig=false,ConsolePluginContentSecurityPolicy=false,DNSNameResolver=false,DynamicResourceAllocation=false,EtcdBackendQuota=false,EventedPLEG=false,Example=false,ExternalOIDC=false,ExternalOIDCWithUIDAndExtraClaimMappings=false,GCPClusterHostedDNS=false,GatewayAPI=false,ImageStreamImportMode=false,IngressControllerDynamicConfigurationManager=false,InsightsConfig=false,InsightsConfigAPI=false,InsightsOnDemandDataGather=false,InsightsRuntimeExtractor=false,MachineAPIMigration=false,MachineAPIOperatorDisableMachineHealthCheckController=false,MachineAPIProviderOpenStack=false,MachineConfigNodes=false,MaxUnavailableStatefulSet=false,MetricsCollectionProfiles=false,MinimumKubeletVersion=false,MixedCPUsAllocation=false,MultiArchInstallAzure=false,NodeSwap=false,NutanixMultiSubnets=false,OVNObservability=false,OpenShiftPodSecurityAdmission=false,PinnedImages=false,PlatformOperators=false,ProcMountType=false,RouteAdvertisements=false,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,SignatureStores=false,SigstoreImageVerification=false,TranslateStreamCloseWebsocketRequests=false,UpgradeStatus=false,UserNamespacesPodSecurityStandards=false,UserNamespacesSupport=false,VSphereMultiNetworks=false,VolumeAttributesClass=false,VolumeGroupSnapshot=false | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-config-observer-configobserver |
kube-controller-manager-operator |
ObserveFeatureFlagsUpdated |
Updated featureGates to AWSEFSDriverVolumeMetrics=true,AdminNetworkPolicy=true,AlibabaPlatform=true,AzureWorkloadIdentity=true,BareMetalLoadBalancer=true,BuildCSIVolumes=true,ChunkSizeMiB=true,CloudDualStackNodeIPs=true,DisableKubeletCloudCredentialProviders=true,GCPLabelsTags=true,HardwareSpeed=true,IngressControllerLBSubnetsAWS=true,KMSv1=true,ManagedBootImages=true,ManagedBootImagesAWS=true,MultiArchInstallAWS=true,MultiArchInstallGCP=true,NetworkDiagnosticsConfig=true,NetworkLiveMigration=true,NetworkSegmentation=true,NewOLM=true,NodeDisruptionPolicy=true,OnClusterBuild=true,PersistentIPsForVirtualization=true,PrivateHostedZoneAWS=true,SetEIPForNLBIngressController=true,VSphereControlPlaneMachineSet=true,VSphereDriverConfiguration=true,VSphereMultiVCenters=true,VSphereStaticIPs=true,ValidatingAdmissionPolicy=true,AWSClusterHostedDNS=false,AdditionalRoutingCapabilities=false,AutomatedEtcdBackup=false,BootcNodeManagement=false,CSIDriverSharedResource=false,ClusterAPIInstall=false,ClusterAPIInstallIBMCloud=false,ClusterMonitoringConfig=false,ConsolePluginContentSecurityPolicy=false,DNSNameResolver=false,DynamicResourceAllocation=false,EtcdBackendQuota=false,EventedPLEG=false,Example=false,ExternalOIDC=false,ExternalOIDCWithUIDAndExtraClaimMappings=false,GCPClusterHostedDNS=false,GatewayAPI=false,ImageStreamImportMode=false,IngressControllerDynamicConfigurationManager=false,InsightsConfig=false,InsightsConfigAPI=false,InsightsOnDemandDataGather=false,InsightsRuntimeExtractor=false,MachineAPIMigration=false,MachineAPIOperatorDisableMachineHealthCheckController=false,MachineAPIProviderOpenStack=false,MachineConfigNodes=false,MaxUnavailableStatefulSet=false,MetricsCollectionProfiles=false,MinimumKubeletVersion=false,MixedCPUsAllocation=false,MultiArchInstallAzure=false,NodeSwap=false,NutanixMultiSubnets=false,OVNObservability=false,OpenShiftPodSecurityAdmission=false,PinnedImages=false,PlatformOperators=false,ProcMountType=false,RouteAdvertisements=false,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,SignatureStores=false,SigstoreImageVerification=false,TranslateStreamCloseWebsocketRequests=false,UpgradeStatus=false,UserNamespacesPodSecurityStandards=false,UserNamespacesSupport=false,VSphereMultiNetworks=false,VolumeAttributesClass=false,VolumeGroupSnapshot=false | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-config-observer-configobserver |
kube-controller-manager-operator |
ObserveTLSSecurityProfile |
minTLSVersion changed to VersionTLS12 | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-config-observer-configobserver |
kube-controller-manager-operator |
ObserveTLSSecurityProfile |
cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"] | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-guardcontroller |
kube-apiserver-operator |
PodDisruptionBudgetCreated |
Created PodDisruptionBudget.policy/kube-apiserver-guard-pdb -n openshift-kube-apiserver because it was missing | |
openshift-route-controller-manager |
deployment-controller |
route-controller-manager |
ScalingReplicaSet |
Scaled up replica set route-controller-manager-7f89f9db8c to 2 | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]" | |
openshift-route-controller-manager |
replicaset-controller |
route-controller-manager-7f89f9db8c |
SuccessfulCreate |
Created pod: route-controller-manager-7f89f9db8c-dx7pm | |
(x2) | openshift-kube-controller-manager |
controllermanager |
kube-controller-manager-guard-pdb |
NoPods |
No matching pods found |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-config-observer-configobserver |
kube-controller-manager-operator |
ObservedConfigChanged |
Writing updated observed config: map[string]any{ + "extendedArguments": map[string]any{ + "cluster-cidr": []any{string("10.128.0.0/14")}, + "cluster-name": []any{string("ocp-znddg")}, + "feature-gates": []any{ + string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), + string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), + string("BareMetalLoadBalancer=true"), string("BuildCSIVolumes=true"), + string("ChunkSizeMiB=true"), string("CloudDualStackNodeIPs=true"), ..., + }, + "service-cluster-ip-range": []any{string("172.30.0.0/16")}, + }, + "featureGates": []any{ + string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), + string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), + string("BareMetalLoadBalancer=true"), string("BuildCSIVolumes=true"), + string("ChunkSizeMiB=true"), string("CloudDualStackNodeIPs=true"), + string("DisableKubeletCloudCredentialProviders=true"), + string("GCPLabelsTags=true"), string("HardwareSpeed=true"), + string("IngressControllerLBSubnetsAWS=true"), string("KMSv1=true"), + string("ManagedBootImages=true"), string("ManagedBootImagesAWS=true"), + string("MultiArchInstallAWS=true"), ..., + }, + "servingInfo": map[string]any{ + "cipherSuites": []any{ + string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), + string("TLS_CHACHA20_POLY1305_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"), ..., + }, + "minTLSVersion": string("VersionTLS12"), + }, } | |
openshift-controller-manager |
replicaset-controller |
controller-manager-5d9b59775c |
SuccessfulCreate |
Created pod: controller-manager-5d9b59775c-x2cz2 | |
openshift-controller-manager |
replicaset-controller |
controller-manager-5d9b59775c |
SuccessfulDelete |
Deleted pod: controller-manager-5d9b59775c-5nrm2 | |
(x2) | openshift-controller-manager |
kubelet |
controller-manager-5d9b59775c-5nrm2 |
FailedMount |
MountVolume.SetUp failed for volume "config" : configmap "config" not found |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-guardcontroller |
kube-controller-manager-operator |
PodDisruptionBudgetCreated |
Created PodDisruptionBudget.policy/kube-controller-manager-guard-pdb -n openshift-kube-controller-manager because it was missing | |
openshift-controller-manager |
default-scheduler |
controller-manager-5d9b59775c-x2cz2 |
Scheduled |
Successfully assigned openshift-controller-manager/controller-manager-5d9b59775c-x2cz2 to master-1 | |
openshift-service-ca-operator |
service-ca-operator |
service-ca-operator |
SecretCreated |
Created Secret/signing-key -n openshift-service-ca because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
SignerUpdateRequired |
"node-system-admin-signer" in "openshift-kube-apiserver-operator" requires a new signing cert/key pair: secret doesn't exist | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-config-observer-configobserver |
etcd-operator |
ObservedConfigChanged |
Writing updated observed config: Â Â map[string]any{ +Â "controlPlane": map[string]any{"replicas": float64(3)}, +Â "servingInfo": map[string]any{ +Â "cipherSuites": []any{ +Â string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), +Â string("TLS_CHACHA20_POLY1305_SHA256"), +Â string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), +Â string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"), +Â string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"), +Â string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"), +Â string("TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"), ..., +Â }, +Â "minTLSVersion": string("VersionTLS12"), +Â }, Â Â } | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-config-observer-configobserver |
etcd-operator |
ObserveTLSSecurityProfile |
cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"] | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found\nGuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]" | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found\nGuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found\nGuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]",Progressing message changed from "All is well" to "NodeInstallerProgressing: 2 nodes are at revision 0",Available changed from Unknown to False ("StaticPodsAvailable: 0 nodes are active; 2 nodes are at revision 0") | |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
DeploymentCreated |
Created Deployment.apps/route-controller-manager -n openshift-route-controller-manager because it was missing | |
openshift-controller-manager |
default-scheduler |
controller-manager-5d9b59775c-5nrm2 |
Scheduled |
Successfully assigned openshift-controller-manager/controller-manager-5d9b59775c-5nrm2 to master-2 | |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/config -n openshift-controller-manager because it was missing | |
(x2) | openshift-kube-apiserver |
controllermanager |
kube-apiserver-guard-pdb |
NoPods |
No matching pods found |
openshift-controller-manager |
deployment-controller |
controller-manager |
ScalingReplicaSet |
Scaled down replica set controller-manager-5d9b59775c to 1 from 2 | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-config-observer-configobserver |
etcd-operator |
ObserveTLSSecurityProfile |
minTLSVersion changed to VersionTLS12 | |
openshift-controller-manager |
replicaset-controller |
controller-manager-5d9b59775c |
SuccessfulCreate |
Created pod: controller-manager-5d9b59775c-5nrm2 | |
(x2) | openshift-controller-manager |
kubelet |
controller-manager-5d9b59775c-x2cz2 |
FailedMount |
MountVolume.SetUp failed for volume "config" : configmap "config" not found |
(x2) | openshift-controller-manager |
kubelet |
controller-manager-5d9b59775c-x2cz2 |
FailedMount |
MountVolume.SetUp failed for volume "proxy-ca-bundles" : configmap "openshift-global-ca" not found |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/config -n openshift-route-controller-manager because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/openshift-service-ca -n openshift-controller-manager because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found" to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]",Progressing message changed from "All is well" to "NodeInstallerProgressing: 2 nodes are at revision 0",Available changed from Unknown to False ("StaticPodsAvailable: 0 nodes are active; 2 nodes are at revision 0") | |
openshift-service-ca-operator |
service-ca-operator |
service-ca-operator |
DeploymentCreated |
Created Deployment.apps/service-ca -n openshift-service-ca because it was missing | |
openshift-service-ca-operator |
service-ca-operator-resource-sync-controller-resourcesynccontroller |
service-ca-operator |
ConfigMapCreated |
Created ConfigMap/service-ca -n openshift-config-managed because it was missing | |
openshift-service-ca-operator |
service-ca-operator |
service-ca-operator |
ConfigMapCreated |
Created ConfigMap/signing-cabundle -n openshift-service-ca because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator |
kube-controller-manager-operator |
TargetUpdateRequired |
"csr-signer" in "openshift-kube-controller-manager-operator" requires a new target cert/key pair: secret doesn't exist | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/csr-controller-signer-ca -n openshift-kube-controller-manager-operator because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found" to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]" to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found" | |
(x2) | openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
CABundleUpdateRequired |
"loadbalancer-serving-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
CABundleUpdateRequired |
"localhost-serving-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist | |
openshift-cluster-storage-operator |
kubelet |
csi-snapshot-controller-ddd7d64cd-hph6v |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eeb8312c455dd728870a6332c7e36e9068f6031127ce3e481a9a1131da527265" in 1.468s (1.468s including waiting). Image size: 456743409 bytes. | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n openshift-controller-manager because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
CABundleUpdateRequired |
"service-network-serving-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
SignerUpdateRequired |
"localhost-recovery-serving-signer" in "openshift-kube-apiserver-operator" requires a new signing cert/key pair: secret doesn't exist | |
openshift-controller-manager |
deployment-controller |
controller-manager |
ScalingReplicaSet |
Scaled up replica set controller-manager-6687f866cc to 1 from 0 | |
openshift-cluster-olm-operator |
kubelet |
cluster-olm-operator-77b56b6f4f-prtfl |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:208d81ddcca0864f3a225e11a2fdcf7c67d32bae142bd9a9d154a76cffea08e7" | |
openshift-cluster-olm-operator |
kubelet |
cluster-olm-operator-77b56b6f4f-prtfl |
Started |
Started container copy-operator-controller-manifests | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n openshift-controller-manager because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-route-controller-manager -n openshift-route-controller-manager because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-route-controller-manager -n openshift-route-controller-manager because it was missing | |
openshift-cluster-olm-operator |
kubelet |
cluster-olm-operator-77b56b6f4f-prtfl |
Created |
Created container: copy-operator-controller-manifests | |
openshift-cluster-olm-operator |
kubelet |
cluster-olm-operator-77b56b6f4f-prtfl |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94bcc0ff0f9ec7df4aeb53fe4bf0310e26cb7b40bdf772efc95a7ccfcfe69721" in 2.795s (2.796s including waiting). Image size: 488102305 bytes. | |
openshift-cluster-storage-operator |
snapshot-controller-leader/csi-snapshot-controller-ddd7d64cd-hph6v |
snapshot-controller-leader |
LeaderElection |
csi-snapshot-controller-ddd7d64cd-hph6v became leader | |
openshift-service-ca-operator |
service-ca-operator-status-controller-statussyncer_service-ca |
service-ca-operator |
OperatorStatusChanged |
Status for clusteroperator/service-ca changed: Progressing changed from Unknown to True ("Progressing: \nProgressing: service-ca does not have available replicas"),Available changed from Unknown to True ("All is well"),Upgradeable changed from Unknown to True ("All is well") | |
openshift-controller-manager |
replicaset-controller |
controller-manager-6687f866cc |
SuccessfulCreate |
Created pod: controller-manager-6687f866cc-2f4dq | |
(x3) | openshift-controller-manager |
kubelet |
controller-manager-5d9b59775c-5nrm2 |
FailedMount |
MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found |
openshift-route-controller-manager |
default-scheduler |
route-controller-manager-7f89f9db8c-dx7pm |
Scheduled |
Successfully assigned openshift-route-controller-manager/route-controller-manager-7f89f9db8c-dx7pm to master-2 | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller |
etcd-operator |
ConfigMapCreated |
Created ConfigMap/etcd-metric-serving-ca -n openshift-etcd-operator because it was missing | |
openshift-controller-manager |
default-scheduler |
controller-manager-6687f866cc-2f4dq |
FailedScheduling |
0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. | |
openshift-apiserver-operator |
openshift-apiserver-operator-config-observer-configobserver |
openshift-apiserver-operator |
ObservedConfigChanged |
Writing updated observed config: map[string]any{ + "apiServerArguments": map[string]any{ + "feature-gates": []any{ + string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), + string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), + string("BareMetalLoadBalancer=true"), string("BuildCSIVolumes=true"), + string("ChunkSizeMiB=true"), string("CloudDualStackNodeIPs=true"), ..., + }, + }, + "projectConfig": map[string]any{"projectRequestMessage": string("")}, + "routingConfig": map[string]any{"subdomain": string("apps.ocp.openstack.lab")}, + "servingInfo": map[string]any{ + "cipherSuites": []any{ + string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), + string("TLS_CHACHA20_POLY1305_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"), ..., + }, + "minTLSVersion": string("VersionTLS12"), + }, + "storageConfig": map[string]any{"urls": []any{string("https://192.168.34.10:2379")}}, } | |
openshift-apiserver-operator |
openshift-apiserver-operator-config-observer-configobserver |
openshift-apiserver-operator |
ObserveStorageUpdated |
Updated storage urls to https://192.168.34.10:2379 | |
openshift-route-controller-manager |
default-scheduler |
route-controller-manager-7f89f9db8c-j4hd5 |
Scheduled |
Successfully assigned openshift-route-controller-manager/route-controller-manager-7f89f9db8c-j4hd5 to master-1 | |
openshift-config-operator |
kubelet |
openshift-config-operator-55957b47d5-vtkr6 |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aa10afc83b17b0d76fcff8963f51e62ae851f145cd6c27f61a0604e0c713fe3a" in 2.789s (2.789s including waiting). Image size: 489030103 bytes. | |
openshift-network-operator |
kubelet |
iptables-alerter-m6qfh |
Created |
Created container: iptables-alerter | |
(x3) | openshift-controller-manager |
kubelet |
controller-manager-5d9b59775c-5nrm2 |
FailedMount |
MountVolume.SetUp failed for volume "client-ca" : configmap "client-ca" not found |
openshift-apiserver-operator |
openshift-apiserver-operator-config-observer-configobserver |
openshift-apiserver-operator |
ObserveTLSSecurityProfile |
cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"] | |
(x2) | openshift-authentication-operator |
kubelet |
authentication-operator-66df44bc95-gldlr |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b5f27555b2adaa9cd82922dde7517c78eac05afdd090d572e62a9a425b42a7d" |
openshift-apiserver-operator |
openshift-apiserver-operator-config-observer-configobserver |
openshift-apiserver-operator |
ObserveTLSSecurityProfile |
minTLSVersion changed to VersionTLS12 | |
openshift-route-controller-manager |
replicaset-controller |
route-controller-manager-7f89f9db8c |
SuccessfulCreate |
Created pod: route-controller-manager-7f89f9db8c-j4hd5 | |
openshift-service-ca |
deployment-controller |
service-ca |
ScalingReplicaSet |
Scaled up replica set service-ca-64446499c7 to 1 | |
openshift-network-operator |
kubelet |
iptables-alerter-m6qfh |
Started |
Started container iptables-alerter | |
openshift-service-ca |
replicaset-controller |
service-ca-64446499c7 |
SuccessfulCreate |
Created pod: service-ca-64446499c7-ghfpb | |
openshift-apiserver-operator |
openshift-apiserver-operator-config-observer-configobserver |
openshift-apiserver-operator |
ObserveFeatureFlagsUpdated |
Updated apiServerArguments.feature-gates to AWSEFSDriverVolumeMetrics=true,AdminNetworkPolicy=true,AlibabaPlatform=true,AzureWorkloadIdentity=true,BareMetalLoadBalancer=true,BuildCSIVolumes=true,ChunkSizeMiB=true,CloudDualStackNodeIPs=true,DisableKubeletCloudCredentialProviders=true,GCPLabelsTags=true,HardwareSpeed=true,IngressControllerLBSubnetsAWS=true,KMSv1=true,ManagedBootImages=true,ManagedBootImagesAWS=true,MultiArchInstallAWS=true,MultiArchInstallGCP=true,NetworkDiagnosticsConfig=true,NetworkLiveMigration=true,NetworkSegmentation=true,NewOLM=true,NodeDisruptionPolicy=true,OnClusterBuild=true,PersistentIPsForVirtualization=true,PrivateHostedZoneAWS=true,SetEIPForNLBIngressController=true,VSphereControlPlaneMachineSet=true,VSphereDriverConfiguration=true,VSphereMultiVCenters=true,VSphereStaticIPs=true,ValidatingAdmissionPolicy=true,AWSClusterHostedDNS=false,AdditionalRoutingCapabilities=false,AutomatedEtcdBackup=false,BootcNodeManagement=false,CSIDriverSharedResource=false,ClusterAPIInstall=false,ClusterAPIInstallIBMCloud=false,ClusterMonitoringConfig=false,ConsolePluginContentSecurityPolicy=false,DNSNameResolver=false,DynamicResourceAllocation=false,EtcdBackendQuota=false,EventedPLEG=false,Example=false,ExternalOIDC=false,ExternalOIDCWithUIDAndExtraClaimMappings=false,GCPClusterHostedDNS=false,GatewayAPI=false,ImageStreamImportMode=false,IngressControllerDynamicConfigurationManager=false,InsightsConfig=false,InsightsConfigAPI=false,InsightsOnDemandDataGather=false,InsightsRuntimeExtractor=false,MachineAPIMigration=false,MachineAPIOperatorDisableMachineHealthCheckController=false,MachineAPIProviderOpenStack=false,MachineConfigNodes=false,MaxUnavailableStatefulSet=false,MetricsCollectionProfiles=false,MinimumKubeletVersion=false,MixedCPUsAllocation=false,MultiArchInstallAzure=false,NodeSwap=false,NutanixMultiSubnets=false,OVNObservability=false,OpenShiftPodSecurityAdmission=false,PinnedImages=false,PlatformOperators=false,ProcMountType=false,RouteAdvertisements=false,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,SignatureStores=false,SigstoreImageVerification=false,TranslateStreamCloseWebsocketRequests=false,UpgradeStatus=false,UserNamespacesPodSecurityStandards=false,UserNamespacesSupport=false,VSphereMultiNetworks=false,VolumeAttributesClass=false,VolumeGroupSnapshot=false | |
openshift-apiserver-operator |
openshift-apiserver-operator-config-observer-configobserver |
openshift-apiserver-operator |
RoutingConfigSubdomainChanged |
Domain changed from "" to "apps.ocp.openstack.lab" | |
openshift-service-ca |
default-scheduler |
service-ca-64446499c7-ghfpb |
Scheduled |
Successfully assigned openshift-service-ca/service-ca-64446499c7-ghfpb to master-2 | |
openshift-service-ca |
multus |
service-ca-64446499c7-ghfpb |
AddedInterface |
Add eth0 [10.129.0.8/23] from ovn-kubernetes | |
openshift-service-ca |
kubelet |
service-ca-64446499c7-ghfpb |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:97de153ac76971fa69d4af7166c63416fbe37d759deb7833340c1c39d418b745" | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator |
kube-controller-manager-operator |
SecretCreated |
Created Secret/csr-signer -n openshift-kube-controller-manager-operator because it was missing | |
openshift-controller-manager |
deployment-controller |
controller-manager |
ScalingReplicaSet |
Scaled up replica set controller-manager-bcf7659b to 1 from 0 | |
openshift-controller-manager |
replicaset-controller |
controller-manager-5d9b59775c |
SuccessfulDelete |
Deleted pod: controller-manager-5d9b59775c-x2cz2 | |
openshift-controller-manager |
default-scheduler |
controller-manager-6687f866cc-2f4dq |
Scheduled |
Successfully assigned openshift-controller-manager/controller-manager-6687f866cc-2f4dq to master-2 | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
SecretUpdated |
Updated Secret/kube-control-plane-signer -n openshift-kube-apiserver-operator because it changed | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-backingresourcecontroller-backingresourcecontroller-staticresources |
etcd-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:openshift-etcd-installer because it was missing | |
openshift-cluster-storage-operator |
csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller |
csi-snapshot-controller-operator |
OperatorStatusChanged |
Status for clusteroperator/csi-snapshot-controller changed: Available changed from False to True ("All is well") | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller |
etcd-operator |
ConfigMapCreated |
Created ConfigMap/cluster-config-v1 -n openshift-etcd because it was missing | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-backingresourcecontroller-backingresourcecontroller-staticresources |
etcd-operator |
ServiceAccountCreated |
Created ServiceAccount/installer-sa -n openshift-etcd because it was missing | |
openshift-apiserver-operator |
openshift-apiserver-operator-audit-policy-controller-auditpolicycontroller |
openshift-apiserver-operator |
ConfigMapCreateFailed |
Failed to create ConfigMap/audit -n openshift-apiserver: namespaces "openshift-apiserver" not found | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-etcdstaticresources-etcdstaticresources-staticresources |
etcd-operator |
NamespaceUpdated |
Updated Namespace/openshift-etcd because it changed | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: waiting for observed configuration to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for observed configuration to be available\nAPIServerWorkloadDegraded: \nRevisionControllerDegraded: configmap \"audit\" not found" to "APIServerDeploymentDegraded: waiting for observed configuration to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for observed configuration to be available\nAPIServerWorkloadDegraded: \nRevisionControllerDegraded: configmap \"audit\" not found\nAuditPolicyDegraded: namespaces \"openshift-apiserver\" not found" | |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
ConfigMapUpdated |
Updated ConfigMap/config -n openshift-controller-manager: caused by changes in data.openshift-controller-manager.openshift-global-ca.configmap | |
(x2) | openshift-kube-scheduler-operator |
kubelet |
openshift-kube-scheduler-operator-766d6b44f6-gtvcp |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f81582ec6e6cc159d578a2d70ce7c8a4db8eb0172334226c9123770d7d2a1642" |
openshift-config-operator |
config-operator |
config-operator-lock |
LeaderElection |
openshift-config-operator-55957b47d5-vtkr6_69ce7d56-9fde-47ba-99c9-37dc510b2659 became leader | |
(x5) | openshift-etcd-operator |
openshift-cluster-etcd-operator-installer-controller |
etcd-operator |
RequiredInstallerResourcesMissing |
configmaps: etcd-all-bundles-0,etcd-endpoints-0,etcd-pod-0, secrets: etcd-all-certs-0 |
openshift-controller-manager |
default-scheduler |
controller-manager-bcf7659b-pckjm |
FailedScheduling |
0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. | |
openshift-config-operator |
config-operator-status-controller-statussyncer_config-operator |
openshift-config-operator |
OperatorStatusChanged |
Status for clusteroperator/config-operator changed: Degraded changed from Unknown to False ("All is well") | |
openshift-config-operator |
config-operator-status-controller-statussyncer_config-operator |
openshift-config-operator |
OperatorStatusChanged |
Status for clusteroperator/config-operator changed: Degraded set to Unknown (""),Progressing set to False ("All is well"),Available set to True ("All is well"),Upgradeable set to True ("All is well"),EvaluationConditionsDetected set to Unknown (""),status.versions changed from [] to [{"feature-gates" "4.18.25"} {"operator" "4.18.25"}] | |
(x2) | openshift-config-operator |
config-operator-status-controller-statussyncer_config-operator |
openshift-config-operator |
OperatorVersionChanged |
clusteroperator/config-operator version "operator" changed from "" to "4.18.25" |
(x2) | openshift-config-operator |
config-operator-status-controller-statussyncer_config-operator |
openshift-config-operator |
OperatorVersionChanged |
clusteroperator/config-operator version "feature-gates" changed from "" to "4.18.25" |
openshift-config-operator |
config-operator-configoperatorcontroller |
openshift-config-operator |
ConfigOperatorStatusChanged |
Operator conditions defaulted: [{OperatorAvailable True 2025-10-14 13:08:14 +0000 UTC AsExpected } {OperatorProgressing False 2025-10-14 13:08:14 +0000 UTC AsExpected } {OperatorUpgradeable True 2025-10-14 13:08:14 +0000 UTC AsExpected }] | |
openshift-config-operator |
config-operator-configoperatorcontroller |
openshift-config-operator |
FastControllerResync |
Controller "ConfigOperatorController" resync interval is set to 10s which might lead to client request throttling | |
openshift-controller-manager |
replicaset-controller |
controller-manager-bcf7659b |
SuccessfulCreate |
Created pod: controller-manager-bcf7659b-pckjm | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller |
kube-controller-manager-operator |
TargetConfigDeleted |
Deleted target configmap openshift-config-managed/csr-controller-ca because source config does not exist | |
openshift-controller-manager |
deployment-controller |
controller-manager |
ScalingReplicaSet |
Scaled down replica set controller-manager-5d9b59775c to 0 from 1 | |
(x4) | openshift-controller-manager | kubelet | controller-manager-5d9b59775c-x2cz2 | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found |
| kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-apiserver namespace |
(x6) | openshift-ingress-operator | kubelet | ingress-operator-766ddf4575-xhdjt | FailedMount | MountVolume.SetUp failed for volume "metrics-tls" : secret "metrics-tls" not found |
(x4) | openshift-controller-manager | kubelet | controller-manager-5d9b59775c-x2cz2 | FailedMount | MountVolume.SetUp failed for volume "client-ca" : configmap "client-ca" not found |
(x6) | openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-7866c9bdf4-d4dlj | FailedMount | MountVolume.SetUp failed for volume "node-tuning-operator-tls" : secret "node-tuning-operator-tls" not found |
(x6) | openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-7866c9bdf4-d4dlj | FailedMount | MountVolume.SetUp failed for volume "apiservice-cert" : secret "performance-addon-operator-webhook-cert" not found |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-backingresourcecontroller-backingresourcecontroller-staticresources | kube-controller-manager-operator | ServiceAccountCreated | Created ServiceAccount/installer-sa -n openshift-kube-controller-manager because it was missing |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-backingresourcecontroller-backingresourcecontroller-staticresources | kube-controller-manager-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:openshift-kube-controller-manager-installer because it was missing |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | kube-controller-manager-operator | NamespaceUpdated | Updated Namespace/openshift-kube-controller-manager because it changed |
(x6) | openshift-machine-api | kubelet | cluster-baremetal-operator-6c8fbf4498-kcckh | FailedMount | MountVolume.SetUp failed for volume "cert" : secret "cluster-baremetal-webhook-server-cert" not found |
(x6) | openshift-machine-api | kubelet | cluster-baremetal-operator-6c8fbf4498-kcckh | FailedMount | MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" : secret "cluster-baremetal-operator-tls" not found |
(x6) | openshift-cluster-machine-approver | kubelet | machine-approver-7876f99457-kpq7g | FailedMount | MountVolume.SetUp failed for volume "machine-approver-tls" : secret "machine-approver-tls" not found |
| openshift-apiserver-operator | openshift-apiserver-operator-openshiftapiserver-workloadworkloadcontroller | openshift-apiserver-operator | ConfigMapCreated | Created ConfigMap/config -n openshift-apiserver because it was missing |
(x6) | openshift-machine-api | kubelet | control-plane-machine-set-operator-84f9cbd5d9-n87md | FailedMount | MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" : secret "control-plane-machine-set-operator-tls" not found |
| openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Degraded message changed from "All is well" to "OpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/leader-role.yaml\" (string): namespaces \"openshift-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/leader-rolebinding.yaml\" (string): namespaces \"openshift-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/route-controller-manager-leader-role.yaml\" (string): namespaces \"openshift-route-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/route-controller-manager-leader-rolebinding.yaml\" (string): namespaces \"openshift-route-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: " |
(x6) | openshift-machine-api | kubelet | cluster-autoscaler-operator-7ff449c7c5-nmpfk | FailedMount | MountVolume.SetUp failed for volume "cert" : secret "cluster-autoscaler-operator-cert" not found |
| openshift-apiserver-operator | openshift-apiserver-operator-apiserverstaticresources-apiserverstaticresources-staticresources | openshift-apiserver-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-apiserver because it was missing |
| openshift-apiserver-operator | openshift-apiserver-operator-apiserverstaticresources-apiserverstaticresources-staticresources | openshift-apiserver-operator | NamespaceCreated | Created Namespace/openshift-apiserver because it was missing |
| openshift-etcd-operator | openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller | etcd-operator | ConfigMapUpdated | Updated ConfigMap/etcd-ca-bundle -n openshift-etcd-operator: cause by changes in data.ca-bundle.crt |
(x6) | openshift-dns-operator | kubelet | dns-operator-7769d9677-nh2qc | FailedMount | MountVolume.SetUp failed for volume "metrics-tls" : secret "metrics-tls" not found |
(x6) | openshift-cloud-credential-operator | kubelet | cloud-credential-operator-5cf49b6487-4cf2d | FailedMount | MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" : secret "cloud-credential-operator-serving-cert" not found |
| openshift-etcd-operator | openshift-cluster-etcd-operator-etcdstaticresources-etcdstaticresources-staticresources | etcd-operator | ServiceAccountCreated | Created ServiceAccount/etcd-sa -n openshift-etcd because it was missing |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/service-ca -n openshift-kube-controller-manager because it was missing |
(x6) | openshift-image-registry | kubelet | cluster-image-registry-operator-6b8674d7ff-gspqw | FailedMount | MountVolume.SetUp failed for volume "image-registry-operator-tls" : secret "image-registry-operator-tls" not found |
(x6) | openshift-machine-api | kubelet | machine-api-operator-9dbb96f7-s66vj | FailedMount | MountVolume.SetUp failed for volume "machine-api-operator-tls" : secret "machine-api-operator-tls" not found |
| openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | CABundleUpdateRequired | "kube-apiserver-aggregator-client-ca" in "openshift-config-managed" requires a new cert: configmap doesn't exist |
| openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretUpdated | Updated Secret/aggregator-client-signer -n openshift-kube-apiserver-operator because it changed |
| openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | CABundleUpdateRequired | "kube-apiserver-to-kubelet-client-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist |
| openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretUpdated | Updated Secret/kube-apiserver-to-kubelet-signer -n openshift-kube-apiserver-operator because it changed |
| openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | CABundleUpdateRequired | "node-system-admin-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist |
| openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretCreated | Created Secret/node-system-admin-signer -n openshift-kube-apiserver-operator because it was missing |
| openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/loadbalancer-serving-ca -n openshift-kube-apiserver-operator because it was missing |
| openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Degraded message changed from "OpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/leader-role.yaml\" (string): namespaces \"openshift-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/leader-rolebinding.yaml\" (string): namespaces \"openshift-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/route-controller-manager-leader-role.yaml\" (string): namespaces \"openshift-route-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/route-controller-manager-leader-rolebinding.yaml\" (string): namespaces \"openshift-route-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: " to "All is well",Progressing changed from Unknown to True ("Progressing: deployment/controller-manager: observed generation is 2, desired generation is 3.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/controller-manager: updated replicas is 1, desired replicas is 2\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: openshiftcontrollermanagers.operator.openshift.io/cluster: observed generation is 0, desired generation is 2."),Available changed from Unknown to False ("Available: no pods available on any node."),Upgradeable changed from Unknown to True ("All is well") |
| openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/service-network-serving-ca -n openshift-kube-apiserver-operator because it was missing |
| openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: waiting for observed configuration to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for observed configuration to be available\nAPIServerWorkloadDegraded: \nRevisionControllerDegraded: configmap \"audit\" not found\nAuditPolicyDegraded: namespaces \"openshift-apiserver\" not found" to "APIServerDeploymentDegraded: waiting for observed configuration to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for observed configuration to be available\nAPIServerWorkloadDegraded: \nRevisionControllerDegraded: configmap \"audit\" not found" |
| openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServicesAvailable: endpoints \"api\" not found" to "APIServicesAvailable: PreconditionNotReady" |
| openshift-apiserver-operator | openshift-apiserver-operator-revisioncontroller | openshift-apiserver-operator | StartingNewRevision | new revision 1 triggered by "configmap \"audit-0\" not found" |
| openshift-apiserver-operator | openshift-apiserver-operator-audit-policy-controller-auditpolicycontroller | openshift-apiserver-operator | ConfigMapCreated | Created ConfigMap/audit -n openshift-apiserver because it was missing |
| openshift-controller-manager | default-scheduler | controller-manager-bcf7659b-pckjm | Scheduled | Successfully assigned openshift-controller-manager/controller-manager-bcf7659b-pckjm to master-1 |
| openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | ConfigMapCreateFailed | Failed to create ConfigMap/: configmaps "loadbalancer-serving-ca" already exists |
| openshift-apiserver-operator | openshift-apiserver-operator-apiserverstaticresources-apiserverstaticresources-staticresources | openshift-apiserver-operator | ServiceCreated | Created Service/api -n openshift-apiserver because it was missing |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | kube-controller-manager-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-election-lock-cluster-policy-controller -n openshift-kube-controller-manager because it was missing |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | kube-controller-manager-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/system:openshift:leader-election-lock-cluster-policy-controller -n openshift-kube-controller-manager because it was missing |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | kube-controller-manager-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-kube-controller-manager -n kube-system because it was missing |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/kube-controller-cert-syncer-kubeconfig -n openshift-kube-controller-manager because it was missing |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-satokensignercontroller | kube-controller-manager-operator | SecretCreated | Created Secret/service-account-private-key -n openshift-kube-controller-manager because it was missing |
| openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | TargetUpdateRequired | "service-network-serving-certkey" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-targetconfigcontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/config -n openshift-kube-controller-manager because it was missing |
| openshift-etcd-operator | openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-ca-bundle -n openshift-config because it was missing |
| openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | TargetUpdateRequired | "external-loadbalancer-serving-certkey" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist |
| openshift-kube-apiserver-operator | kube-apiserver-operator-boundsatokensignercontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/bound-sa-token-signing-certs -n openshift-kube-apiserver because it was missing |
| openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/localhost-serving-ca -n openshift-kube-apiserver-operator because it was missing |
| openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | TargetUpdateRequired | "localhost-serving-cert-certkey" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist |
| openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/sa-token-signing-certs -n openshift-kube-apiserver because it was missing |
| openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller-cert-rotation-controller-InternalLoadBalancerServing-certrotationcontroller | kube-apiserver-operator | RotationError | configmaps "loadbalancer-serving-ca" already exists |
| openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | CABundleUpdateRequired | "localhost-recovery-serving-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist |
| openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | TargetUpdateRequired | "internal-loadbalancer-serving-certkey" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist |
| openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]" to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nCertRotation_InternalLoadBalancerServing_Degraded: configmaps \"loadbalancer-serving-ca\" already exists" |
| openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-serving-signer -n openshift-kube-apiserver-operator because it was missing |
(x2) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-config-observer-configobserver | kube-controller-manager-operator | ObserveServiceCAConfigMap | observed change in config |
| openshift-kube-apiserver-operator | kube-apiserver-operator-audit-policy-controller-auditpolicycontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-audit-policies -n openshift-kube-apiserver because it was missing |
(x4) | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | CABundleUpdateRequired | "kube-control-plane-signer-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist |
| openshift-apiserver-operator | openshift-apiserver-operator-openshiftapiserver-workloadworkloadcontroller | openshift-apiserver-operator | ConfigMapCreated | Created ConfigMap/image-import-ca -n openshift-apiserver because it was missing |
(x2) | openshift-kube-storage-version-migrator-operator | kubelet | kube-storage-version-migrator-operator-dcfdffd74-ckmcc | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b9e086347802546d8040d17296f434edf088305103b874c900beee3a3575c34" |
| assisted-installer | kubelet | master-2-debug-xl9pk | Started | Started container container-00 |
| openshift-etcd-operator | openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-serving-ca -n openshift-config because it was missing |
| openshift-etcd-operator | openshift-cluster-etcd-operator-etcdstaticresources-etcdstaticresources-staticresources | etcd-operator | ServiceUpdated | Updated Service/etcd -n openshift-etcd because it changed |
| openshift-etcd-operator | openshift-cluster-etcd-operator-etcdstaticresources-etcdstaticresources-staticresources | etcd-operator | ServiceMonitorCreated | Created ServiceMonitor.monitoring.coreos.com/etcd -n openshift-etcd-operator because it was missing |
| openshift-etcd-operator | openshift-cluster-etcd-operator-etcdstaticresources-etcdstaticresources-staticresources | etcd-operator | ServiceMonitorCreated | Created ServiceMonitor.monitoring.coreos.com/etcd-minimal -n openshift-etcd-operator because it was missing |
| assisted-installer | kubelet | master-2-debug-xl9pk | Created | Created container: container-00 |
| assisted-installer | kubelet | master-2-debug-xl9pk | Pulled | Successfully pulled image "registry.redhat.io/rhel9/support-tools" in 6.56s (6.56s including waiting). Image size: 376913914 bytes. |
| openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-aggregator-client-ca -n openshift-config-managed because it was missing |
| openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-control-plane-signer-ca -n openshift-kube-apiserver-operator because it was missing |
| openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | TargetUpdateRequired | "node-system-admin-client" in "openshift-kube-apiserver-operator" requires a new target cert/key pair: secret doesn't exist |
| openshift-apiserver-operator | openshift-apiserver-operator-revisioncontroller | openshift-apiserver-operator | ConfigMapCreated | Created ConfigMap/audit-1 -n openshift-apiserver because it was missing |
| openshift-apiserver-operator | openshift-apiserver-operator-apiserverstaticresources-apiserverstaticresources-staticresources | openshift-apiserver-operator | ServiceAccountCreated | Created ServiceAccount/openshift-apiserver-sa -n openshift-apiserver because it was missing |
| openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/node-system-admin-ca -n openshift-kube-apiserver-operator because it was missing |
| openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | TargetUpdateRequired | "kube-controller-manager-client-cert-key" in "openshift-config-managed" requires a new target cert/key pair: secret doesn't exist |
(x2) | openshift-etcd-operator | openshift-cluster-etcd-operator-env-var-controller | etcd-operator | EnvVarControllerUpdatingStatus | Operation cannot be fulfilled on etcds.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again |
| openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | TargetUpdateRequired | "kubelet-client" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist |
| openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-to-kubelet-client-ca -n openshift-kube-apiserver-operator because it was missing |
(x2) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-config-observer-configobserver | kube-controller-manager-operator | ObservedConfigChanged | Writing updated observed config:   map[string]any{   "extendedArguments": map[string]any{"cluster-cidr": []any{string("10.128.0.0/14")}, "cluster-name": []any{string("ocp-znddg")}, "feature-gates": []any{string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), ...}, "service-cluster-ip-range": []any{string("172.30.0.0/16")}},   "featureGates": []any{string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), ...}, + "serviceServingCert": map[string]any{ + "certFile": string("/etc/kubernetes/static-pod-resources/configmaps/service-ca/ca-bundle.crt"), + },   "servingInfo": map[string]any{"cipherSuites": []any{string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), string("TLS_CHACHA20_POLY1305_SHA256"), string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), ...}, "minTLSVersion": string("VersionTLS12")},   } |
| openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 2, desired generation is 3.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/controller-manager: updated replicas is 1, desired replicas is 2\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: openshiftcontrollermanagers.operator.openshift.io/cluster: observed generation is 0, desired generation is 2." to "Progressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/controller-manager: updated replicas is 1, desired replicas is 2\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" |
| openshift-service-ca | kubelet | service-ca-64446499c7-ghfpb | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:97de153ac76971fa69d4af7166c63416fbe37d759deb7833340c1c39d418b745" in 5.153s (5.153s including waiting). Image size: 501585296 bytes. |
| openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | TargetUpdateRequired | "aggregator-client" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist |
| openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nCertRotation_CheckEndpointsClient_Degraded: configmaps \"kube-control-plane-signer-ca\" already exists\nCertRotation_InternalLoadBalancerServing_Degraded: configmaps \"loadbalancer-serving-ca\" already exists" to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nCertRotation_CheckEndpointsClient_Degraded: configmaps \"kube-control-plane-signer-ca\" already exists\nCertRotation_ControlPlaneNodeAdminClient_Degraded: configmaps \"kube-control-plane-signer-ca\" already exists\nCertRotation_InternalLoadBalancerServing_Degraded: configmaps \"loadbalancer-serving-ca\" already exists" |
| openshift-service-ca | kubelet | service-ca-64446499c7-ghfpb | Created | Created container: service-ca-controller |
| openshift-kube-apiserver-operator | kube-apiserver-operator-boundsatokensignercontroller | kube-apiserver-operator | SecretCreated | Created Secret/bound-service-account-signing-key -n openshift-kube-apiserver because it was missing |
(x2) | openshift-apiserver | controllermanager | openshift-apiserver-pdb | NoPods | No matching pods found |
| openshift-service-ca | service-ca-controller | service-ca-controller-lock | LeaderElection | service-ca-64446499c7-ghfpb_cd332072-208d-4efe-98ca-265ec9319de0 became leader |
| openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller-cert-rotation-controller-ControlPlaneNodeAdminClient-certrotationcontroller | kube-apiserver-operator | RotationError | configmaps "kube-control-plane-signer-ca" already exists |
| openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nCertRotation_InternalLoadBalancerServing_Degraded: configmaps \"loadbalancer-serving-ca\" already exists" to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nCertRotation_CheckEndpointsClient_Degraded: configmaps \"kube-control-plane-signer-ca\" already exists\nCertRotation_InternalLoadBalancerServing_Degraded: configmaps \"loadbalancer-serving-ca\" already exists" |
| openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller-cert-rotation-controller-CheckEndpointsClient-certrotationcontroller | kube-apiserver-operator | RotationError | configmaps "kube-control-plane-signer-ca" already exists |
| openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-serving-cert-certkey -n openshift-kube-apiserver because it was missing |
| openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretCreated | Created Secret/service-network-serving-certkey -n openshift-kube-apiserver because it was missing |
| openshift-etcd-operator | openshift-cluster-etcd-operator-etcd-cert-signer-controller | etcd-operator | TargetUpdateRequired | "etcd-peer-master-1" in "openshift-etcd" requires a new target cert/key pair: secret doesn't exist |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-targetconfigcontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/cluster-policy-controller-config -n openshift-kube-controller-manager because it was missing |
| openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | TargetUpdateRequired | "control-plane-node-admin-client-cert-key" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist |
| openshift-apiserver-operator | openshift-apiserver-operator-apiserverstaticresources-apiserverstaticresources-staticresources | openshift-apiserver-operator | PodDisruptionBudgetCreated | Created PodDisruptionBudget.policy/openshift-apiserver-pdb -n openshift-apiserver because it was missing |
| openshift-apiserver-operator | openshift-apiserver-operator-apiserverstaticresources-apiserverstaticresources-staticresources | openshift-apiserver-operator | ConfigMapCreated | Created ConfigMap/trusted-ca-bundle -n openshift-apiserver because it was missing |
| openshift-apiserver-operator | openshift-apiserver-operator-resource-sync-controller-resourcesynccontroller | openshift-apiserver-operator | ConfigMapCreated | Created ConfigMap/etcd-serving-ca -n openshift-apiserver because it was missing |
| openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | TargetUpdateRequired | "check-endpoints-client-cert-key" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist |
| openshift-service-ca | kubelet | service-ca-64446499c7-ghfpb | Started | Started container service-ca-controller |
(x4) | openshift-controller-manager | kubelet | controller-manager-bcf7659b-pckjm | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found |
| openshift-service-ca-operator | service-ca-operator-status-controller-statussyncer_service-ca | service-ca-operator | OperatorStatusChanged | Status for clusteroperator/service-ca changed: status.versions changed from [] to [{"operator" "4.18.25"}] |
(x3) | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | ConfigMapCreateFailed | Failed to create ConfigMap/: configmaps "kube-control-plane-signer-ca" already exists |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources |
kube-controller-manager-operator |
ServiceCreated |
Created Service/kube-controller-manager -n openshift-kube-controller-manager because it was missing | |
openshift-service-ca-operator |
service-ca-operator-status-controller-statussyncer_service-ca |
service-ca-operator |
OperatorStatusChanged |
Status for clusteroperator/service-ca changed: Progressing changed from True to False ("Progressing: All service-ca-operator deployments updated") | |
openshift-service-ca-operator |
service-ca-operator-status-controller-statussyncer_service-ca |
service-ca-operator |
OperatorVersionChanged |
clusteroperator/service-ca version "operator" changed from "" to "4.18.25" | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/aggregator-client-ca -n openshift-kube-controller-manager because it was missing | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-etcd-cert-signer-controller |
etcd-operator |
SecretCreated |
Created Secret/etcd-peer-master-1 -n openshift-etcd because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
TargetUpdateRequired |
"localhost-recovery-serving-certkey" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/localhost-recovery-serving-ca -n openshift-kube-apiserver-operator because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-backingresourcecontroller-backingresourcecontroller-staticresources |
kube-apiserver-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:openshift-kube-apiserver-installer because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-backingresourcecontroller-backingresourcecontroller-staticresources |
kube-apiserver-operator |
ServiceAccountCreated |
Created ServiceAccount/installer-sa -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
SecretCreated |
Created Secret/external-loadbalancer-serving-certkey -n openshift-kube-apiserver because it was missing | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: waiting for observed configuration to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for observed configuration to be available\nAPIServerWorkloadDegraded: \nRevisionControllerDegraded: configmap \"audit\" not found" to "APIServerDeploymentDegraded: waiting for observed configuration to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for observed configuration to be available\nAPIServerWorkloadDegraded: " | |
openshift-apiserver-operator |
openshift-apiserver-operator-revisioncontroller |
openshift-apiserver-operator |
RevisionTriggered |
new revision 1 triggered by "configmap \"audit-0\" not found" | |
openshift-kube-storage-version-migrator-operator |
kubelet |
kube-storage-version-migrator-operator-dcfdffd74-ckmcc |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b9e086347802546d8040d17296f434edf088305103b874c900beee3a3575c34" in 3.216s (3.216s including waiting). Image size: 497656412 bytes. | |
assisted-installer |
kubelet |
master-1-debug-qq2pg |
Created |
Created container: container-00 | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-targetconfigcontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/recycler-config -n openshift-kube-controller-manager because it was missing | |
assisted-installer |
kubelet |
master-1-debug-qq2pg |
Pulled |
Successfully pulled image "registry.redhat.io/rhel9/support-tools" in 10.35s (10.35s including waiting). Image size: 376913914 bytes. | |
assisted-installer |
kubelet |
master-1-debug-qq2pg |
Started |
Started container container-00 | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
SecretCreated |
Created Secret/internal-loadbalancer-serving-certkey -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
SecretCreated |
Created Secret/kube-controller-manager-client-cert-key -n openshift-config-managed because it was missing | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-etcd-cert-signer-controller |
etcd-operator |
TargetUpdateRequired |
"etcd-serving-master-1" in "openshift-etcd" requires a new target cert/key pair: secret doesn't exist | |
openshift-cluster-olm-operator |
kubelet |
cluster-olm-operator-77b56b6f4f-prtfl |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:208d81ddcca0864f3a225e11a2fdcf7c67d32bae142bd9a9d154a76cffea08e7" in 7.654s (7.654s including waiting). Image size: 504201850 bytes. | |
openshift-cluster-storage-operator |
csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller |
csi-snapshot-controller-operator |
OperatorVersionChanged |
clusteroperator/csi-snapshot-controller version "operator" changed from "" to "4.18.25" | |
openshift-cluster-storage-operator |
csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller |
csi-snapshot-controller-operator |
OperatorVersionChanged |
clusteroperator/csi-snapshot-controller version "csi-snapshot-controller" changed from "" to "4.18.25" | |
openshift-cluster-storage-operator |
csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller |
csi-snapshot-controller-operator |
OperatorStatusChanged |
Status for clusteroperator/csi-snapshot-controller changed: status.versions changed from [] to [{"operator" "4.18.25"} {"csi-snapshot-controller" "4.18.25"}] | |
(x5) | openshift-controller-manager |
kubelet |
controller-manager-6687f866cc-2f4dq |
FailedMount |
MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found |
openshift-cluster-storage-operator |
kubelet |
csi-snapshot-controller-ddd7d64cd-5s4kt |
Started |
Started container snapshot-controller | |
openshift-cluster-storage-operator |
kubelet |
csi-snapshot-controller-ddd7d64cd-5s4kt |
Created |
Created container: snapshot-controller | |
openshift-apiserver-operator |
openshift-apiserver-operator-resource-sync-controller-resourcesynccontroller |
openshift-apiserver-operator |
SecretCreated |
Created Secret/etcd-client -n openshift-apiserver because it was missing | |
openshift-authentication-operator |
kubelet |
authentication-operator-66df44bc95-gldlr |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b5f27555b2adaa9cd82922dde7517c78eac05afdd090d572e62a9a425b42a7d" in 7.199s (7.199s including waiting). Image size: 506261367 bytes. | |
openshift-kube-scheduler-operator |
kubelet |
openshift-kube-scheduler-operator-766d6b44f6-gtvcp |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f81582ec6e6cc159d578a2d70ce7c8a4db8eb0172334226c9123770d7d2a1642" in 6.186s (6.186s including waiting). Image size: 499422833 bytes. | |
openshift-cluster-storage-operator |
kubelet |
csi-snapshot-controller-ddd7d64cd-5s4kt |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eeb8312c455dd728870a6332c7e36e9068f6031127ce3e481a9a1131da527265" in 9.49s (9.49s including waiting). Image size: 456743409 bytes. | |
openshift-apiserver |
replicaset-controller |
apiserver-5c6d48559d |
SuccessfulCreate |
Created pod: apiserver-5c6d48559d-v4vd9 | |
openshift-cluster-olm-operator |
kubelet |
cluster-olm-operator-77b56b6f4f-prtfl |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:208d81ddcca0864f3a225e11a2fdcf7c67d32bae142bd9a9d154a76cffea08e7" already present on machine | |
(x2) | openshift-kube-storage-version-migrator-operator |
openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator |
kube-storage-version-migrator-operator |
OperatorVersionChanged |
clusteroperator/kube-storage-version-migrator version "operator" changed from "" to "4.18.25" |
openshift-kube-storage-version-migrator |
deployment-controller |
migrator |
ScalingReplicaSet |
Scaled up replica set migrator-d8c4d9469 to 1 | |
openshift-kube-storage-version-migrator |
replicaset-controller |
migrator-d8c4d9469 |
SuccessfulCreate |
Created pod: migrator-d8c4d9469-hbqzs | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-kube-storage-version-migrator namespace | |
openshift-apiserver |
default-scheduler |
apiserver-5c6d48559d-44pcq |
Scheduled |
Successfully assigned openshift-apiserver/apiserver-5c6d48559d-44pcq to master-2 | |
openshift-apiserver |
default-scheduler |
apiserver-5c6d48559d-v4vd9 |
Scheduled |
Successfully assigned openshift-apiserver/apiserver-5c6d48559d-v4vd9 to master-1 | |
openshift-apiserver |
replicaset-controller |
apiserver-5c6d48559d |
SuccessfulCreate |
Created pod: apiserver-5c6d48559d-44pcq | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-etcd-cert-signer-controller |
etcd-operator |
SecretCreated |
Created Secret/etcd-serving-master-1 -n openshift-etcd because it was missing | |
openshift-apiserver |
deployment-controller |
apiserver |
ScalingReplicaSet |
Scaled up replica set apiserver-5c6d48559d to 2 | |
openshift-apiserver-operator |
openshift-apiserver-operator-openshiftapiserver-workloadworkloadcontroller |
openshift-apiserver-operator |
DeploymentCreated |
Created Deployment.apps/apiserver -n openshift-apiserver because it was missing | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Progressing changed from False to True ("OperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 0, desired generation is 2.") | |
openshift-kube-storage-version-migrator |
default-scheduler |
migrator-d8c4d9469-hbqzs |
Scheduled |
Successfully assigned openshift-kube-storage-version-migrator/migrator-d8c4d9469-hbqzs to master-2 | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: waiting for observed configuration to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for observed configuration to be available\nAPIServerWorkloadDegraded: " to "All is well",Progressing message changed from "OperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 0, desired generation is 2." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 0, desired generation is 1.\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 0, desired generation is 2.",Available message changed from "APIServicesAvailable: PreconditionNotReady" to "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady" | |
(x2) | openshift-authentication-operator |
kubelet |
authentication-operator-66df44bc95-gldlr |
Created |
Created container: authentication-operator |
(x2) | openshift-authentication-operator |
kubelet |
authentication-operator-66df44bc95-gldlr |
Started |
Started container authentication-operator |
openshift-authentication-operator |
kubelet |
authentication-operator-66df44bc95-gldlr |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b5f27555b2adaa9cd82922dde7517c78eac05afdd090d572e62a9a425b42a7d" already present on machine | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
SecretCreated |
Created Secret/node-system-admin-client -n openshift-kube-apiserver-operator because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
SecretCreated |
Created Secret/kubelet-client -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-config-observer-configobserver |
kube-apiserver-operator |
ObserveTLSSecurityProfile |
minTLSVersion changed to VersionTLS12 | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-config-observer-configobserver |
kube-apiserver-operator |
ObserveTLSSecurityProfile |
cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"] | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
SecretCreated |
Created Secret/aggregator-client -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
SecretCreated |
Created Secret/check-endpoints-client-cert-key -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
SecretCreated |
Created Secret/control-plane-node-admin-client-cert-key -n openshift-kube-apiserver because it was missing | |
openshift-kube-storage-version-migrator-operator |
openshift-kube-storage-version-migrator-operator |
openshift-kube-storage-version-migrator-operator-lock |
LeaderElection |
kube-storage-version-migrator-operator-dcfdffd74-ckmcc_521c61d4-3945-4f89-a1a5-2143afec7faf became leader | |
openshift-kube-storage-version-migrator-operator |
openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator |
kube-storage-version-migrator-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-storage-version-migrator changed: Progressing message changed from "KubeStorageVersionMigratorProgressing: Waiting for Deployment to act on changes" to "KubeStorageVersionMigratorProgressing: Waiting for Deployment to deploy pods" | |
openshift-kube-storage-version-migrator-operator |
openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator |
kube-storage-version-migrator-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-storage-version-migrator changed: Progressing changed from Unknown to True ("KubeStorageVersionMigratorProgressing: Waiting for Deployment to act on changes"),Available changed from Unknown to False ("KubeStorageVersionMigratorAvailable: Waiting for Deployment") | |
openshift-kube-storage-version-migrator-operator |
openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator |
kube-storage-version-migrator-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-storage-version-migrator changed: Degraded changed from Unknown to False ("All is well") | |
openshift-kube-storage-version-migrator-operator |
openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator |
kube-storage-version-migrator-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-storage-version-migrator changed: Upgradeable changed from Unknown to True ("All is well") | |
openshift-kube-storage-version-migrator-operator |
openshift-kube-storage-version-migrator-operator-kubestorageversionmigratorstaticresources-kubestorageversionmigratorstaticresources-staticresources |
kube-storage-version-migrator-operator |
NamespaceCreated |
Created Namespace/openshift-kube-storage-version-migrator because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources |
kube-controller-manager-operator |
ServiceAccountCreated |
Created ServiceAccount/kube-controller-manager-sa -n openshift-kube-controller-manager because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller |
kube-controller-manager-operator |
SecretCreated |
Created Secret/kube-controller-manager-client-cert-key -n openshift-kube-controller-manager because it was missing | |
openshift-kube-storage-version-migrator-operator |
openshift-kube-storage-version-migrator-operator-kubestorageversionmigrator-deployment-controller--kubestorageversionmigrator |
kube-storage-version-migrator-operator |
DeploymentCreated |
Created Deployment.apps/migrator -n openshift-kube-storage-version-migrator because it was missing | |
openshift-kube-storage-version-migrator-operator |
openshift-kube-storage-version-migrator-operator-kubestorageversionmigratorstaticresources-kubestorageversionmigratorstaticresources-staticresources |
kube-storage-version-migrator-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/storage-version-migration-migrator because it was missing | |
openshift-kube-storage-version-migrator-operator |
openshift-kube-storage-version-migrator-operator-kubestorageversionmigratorstaticresources-kubestorageversionmigratorstaticresources-staticresources |
kube-storage-version-migrator-operator |
ServiceAccountCreated |
Created ServiceAccount/kube-storage-version-migrator-sa -n openshift-kube-storage-version-migrator because it was missing | |
openshift-kube-storage-version-migrator-operator |
openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator |
kube-storage-version-migrator-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-storage-version-migrator changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.versions changed from [] to [{"operator" "4.18.25"}] | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-lock |
LeaderElection |
openshift-kube-scheduler-operator-766d6b44f6-gtvcp_0bf26680-0786-4950-9b61-2cb3302b3cc3 became leader | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator |
openshift-kube-scheduler-operator |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", 
"TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorVersionChanged |
clusteroperator/kube-scheduler version "raw-internal" changed from "" to "4.18.25" | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-config-observer-configobserver |
openshift-kube-scheduler-operator |
ObserveTLSSecurityProfile |
minTLSVersion changed to VersionTLS12 | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-config-observer-configobserver |
openshift-kube-scheduler-operator |
ObserveTLSSecurityProfile |
cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"] | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-config-observer-configobserver |
openshift-kube-scheduler-operator |
ObservedConfigChanged |
Writing updated observed config: Â Â map[string]any{ +Â "servingInfo": map[string]any{ +Â "cipherSuites": []any{ +Â string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), +Â string("TLS_CHACHA20_POLY1305_SHA256"), +Â string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), +Â string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"), +Â string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"), +Â string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"), +Â string("TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"), ..., +Â }, +Â "minTLSVersion": string("VersionTLS12"), +Â }, Â Â } | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [{"operator.openshift.io" "kubeschedulers" "" "cluster"} {"config.openshift.io" "schedulers" "" ""} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-scheduler-operator"} {"" "namespaces" "" "openshift-kube-scheduler"} {"controlplane.operator.openshift.io" "podnetworkconnectivitychecks" "openshift-kube-scheduler" ""}] to [{"operator.openshift.io" "kubeschedulers" "" "cluster"} {"config.openshift.io" "schedulers" "" ""} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-scheduler"} {"" "namespaces" "" "openshift-kube-scheduler-operator"} {"controlplane.operator.openshift.io" "podnetworkconnectivitychecks" "openshift-kube-apiserver" ""}],status.versions changed from [] to [{"raw-internal" "4.18.25"}] | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Degraded changed from Unknown to False ("All is well") | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Upgradeable changed from Unknown to True ("All is well") | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Degraded message changed from "All is well" to "RevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found" | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Degraded message changed from "RevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found" to "RevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]",Progressing changed from Unknown to False ("All is well"),Available changed from Unknown to False ("StaticPodsAvailable: 0 nodes are active; ") | |
(x2) | openshift-cluster-olm-operator |
kubelet |
cluster-olm-operator-77b56b6f4f-prtfl |
Created |
Created container: cluster-olm-operator |
(x2) | openshift-cluster-olm-operator |
kubelet |
cluster-olm-operator-77b56b6f4f-prtfl |
Started |
Started container cluster-olm-operator |
openshift-authentication-operator |
cluster-authentication-operator |
cluster-authentication-operator-lock |
LeaderElection |
authentication-operator-66df44bc95-gldlr_1e63ac5d-fc17-46a2-b100-0de5226291c0 became leader | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Degraded message changed from "RevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nNodeControllerDegraded: All master nodes are ready" to "RevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nNodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]" | |
openshift-cluster-olm-operator |
cluster-olm-operator |
cluster-olm-operator-lock |
LeaderElection |
cluster-olm-operator-77b56b6f4f-prtfl_fd373ac5-589a-484c-99bd-ffbce9fb7770 became leader | |
openshift-cluster-olm-operator |
CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources |
cluster-olm-operator |
NamespaceCreated |
Created Namespace/openshift-catalogd because it was missing | |
openshift-cluster-olm-operator |
OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources |
cluster-olm-operator |
NamespaceCreated |
Created Namespace/openshift-operator-controller because it was missing | |
openshift-cluster-olm-operator |
olm-status-controller-statussyncer_olm |
cluster-olm-operator |
OperatorStatusChanged | Status for clusteroperator/olm changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [{"" "namespaces" "" "openshift-cluster-olm-operator"} {"operator.openshift.io" "olms" "" "cluster"}] to [{"" "namespaces" "" "openshift-catalogd"} {"apiextensions.k8s.io" "customresourcedefinitions" "" "clustercatalogs.olm.operatorframework.io"} {"" "serviceaccounts" "openshift-catalogd" "catalogd-controller-manager"} {"rbac.authorization.k8s.io" "roles" "openshift-catalogd" "catalogd-leader-election-role"} {"rbac.authorization.k8s.io" "roles" "openshift-config" "catalogd-manager-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "catalogd-manager-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "catalogd-metrics-reader"} {"rbac.authorization.k8s.io" "clusterroles" "" "catalogd-proxy-role"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-catalogd" "catalogd-leader-election-rolebinding"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-config" "catalogd-manager-rolebinding"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "catalogd-manager-rolebinding"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "catalogd-proxy-rolebinding"} {"" "configmaps" "openshift-catalogd" "catalogd-trusted-ca-bundle"} {"" "services" "openshift-catalogd" "catalogd-service"} {"apps" "deployments" "openshift-catalogd" "catalogd-controller-manager"} {"olm.operatorframework.io" "clustercatalogs" "" "openshift-certified-operators"} {"olm.operatorframework.io" "clustercatalogs" "" "openshift-community-operators"} {"olm.operatorframework.io" "clustercatalogs" "" "openshift-redhat-marketplace"} {"olm.operatorframework.io" "clustercatalogs" "" "openshift-redhat-operators"} {"admissionregistration.k8s.io" "mutatingwebhookconfigurations" "" "catalogd-mutating-webhook-configuration"} {"" "namespaces" "" "openshift-operator-controller"} {"apiextensions.k8s.io" "customresourcedefinitions" "" "clusterextensions.olm.operatorframework.io"} {"" "serviceaccounts" "openshift-operator-controller" "operator-controller-controller-manager"} {"rbac.authorization.k8s.io" "roles" "openshift-config" "operator-controller-manager-role"} {"rbac.authorization.k8s.io" "roles" "openshift-operator-controller" "operator-controller-leader-election-role"} {"rbac.authorization.k8s.io" "roles" "openshift-operator-controller" "operator-controller-manager-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-clusterextension-editor-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-clusterextension-viewer-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-extension-editor-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-extension-viewer-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-manager-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-metrics-reader"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-proxy-role"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-config" "operator-controller-manager-rolebinding"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-operator-controller" "operator-controller-leader-election-rolebinding"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-operator-controller" "operator-controller-manager-rolebinding"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "operator-controller-manager-rolebinding"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "operator-controller-proxy-rolebinding"} {"" "configmaps" "openshift-operator-controller" "operator-controller-trusted-ca-bundle"} {"" "services" "openshift-operator-controller" "operator-controller-controller-manager-metrics-service"} {"apps" "deployments" "openshift-operator-controller" "operator-controller-controller-manager"} {"operator.openshift.io" "olms" "" "cluster"} {"" "namespaces" "" "openshift-cluster-olm-operator"}],status.versions changed from [] to [{"operator" "4.18.25"}] | |
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "RevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]" to "RevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nNodeControllerDegraded: All master nodes are ready" | |
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-kube-scheduler-node | openshift-kube-scheduler-operator | MasterNodesReadyChanged | All master nodes are ready | |
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-guardcontroller | openshift-kube-scheduler-operator | PodDisruptionBudgetCreated | Created PodDisruptionBudget.policy/openshift-kube-scheduler-guard-pdb -n openshift-kube-scheduler because it was missing | |
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-kube-scheduler-node | openshift-kube-scheduler-operator | MasterNodeObserved | Observed new master node master-2 | |
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-kube-scheduler-node | openshift-kube-scheduler-operator | MasterNodeObserved | Observed new master node master-1 | |
openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Upgradeable changed from Unknown to True ("All is well") | |
openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | CustomResourceDefinitionCreated | Created CustomResourceDefinition.apiextensions.k8s.io/clusterextensions.olm.operatorframework.io because it was missing | |
openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | CustomResourceDefinitionCreated | Created CustomResourceDefinition.apiextensions.k8s.io/clustercatalogs.olm.operatorframework.io because it was missing | |
openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | ServiceAccountCreated | Created ServiceAccount/operator-controller-controller-manager -n openshift-operator-controller because it was missing | |
openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | ServiceAccountCreated | Created ServiceAccount/catalogd-controller-manager -n openshift-catalogd because it was missing | |
openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/operator-controller-manager-role -n openshift-config because it was missing | |
openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/catalogd-leader-election-role -n openshift-catalogd because it was missing | |
openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/operator-controller-leader-election-role -n openshift-operator-controller because it was missing | |
openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/catalogd-manager-role -n openshift-config because it was missing | |
openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/operator-controller-manager-role -n openshift-operator-controller because it was missing | |
openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Degraded changed from Unknown to False ("All is well") | |
(x2) | openshift-kube-scheduler | controllermanager | openshift-kube-scheduler-guard-pdb | NoPods | No matching pods found |
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-catalogd namespace | |
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-operator-controller namespace | |
openshift-etcd-operator | openshift-cluster-etcd-operator-etcd-cert-signer-controller | etcd-operator | TargetUpdateRequired | "etcd-serving-metrics-master-1" in "openshift-etcd" requires a new target cert/key pair: secret doesn't exist | |
openshift-kube-storage-version-migrator | kubelet | migrator-d8c4d9469-hbqzs | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f0d6e7013acdcdd6199fa08c8e2b4059f547cc6f4b424399f9767497c7692f37" | |
openshift-kube-storage-version-migrator | multus | migrator-d8c4d9469-hbqzs | AddedInterface | Add eth0 [10.129.0.11/23] from ovn-kubernetes | |
openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorVersionChanged | clusteroperator/authentication version "operator" changed from "" to "4.18.25" | |
openshift-authentication-operator | oauth-apiserver-audit-policy-controller-auditpolicycontroller | authentication-operator | FastControllerResync | Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling | |
openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.versions changed from [] to [{"operator" "4.18.25"}] | |
openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded changed from Unknown to False ("All is well"),Upgradeable changed from Unknown to True ("All is well") | |
openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-serving-certkey -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | ServiceCreated | Created Service/apiserver -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nCertRotation_CheckEndpointsClient_Degraded: configmaps \"kube-control-plane-signer-ca\" already exists\nCertRotation_ControlPlaneNodeAdminClient_Degraded: configmaps \"kube-control-plane-signer-ca\" already exists\nCertRotation_InternalLoadBalancerServing_Degraded: configmaps \"loadbalancer-serving-ca\" already exists" to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nCertRotation_CheckEndpointsClient_Degraded: configmaps \"kube-control-plane-signer-ca\" already exists\nCertRotation_ControlPlaneNodeAdminClient_Degraded: configmaps \"kube-control-plane-signer-ca\" already exists\nCertRotation_InternalLoadBalancerServing_Degraded: configmaps \"loadbalancer-serving-ca\" already exists" | |
(x2) | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorVersionChanged | clusteroperator/olm version "operator" changed from "" to "4.18.25" |
openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:kube-apiserver-check-endpoints-node-reader because it was missing | |
openshift-kube-storage-version-migrator | kubelet | migrator-d8c4d9469-hbqzs | Started | Started container graceful-termination | |
openshift-kube-storage-version-migrator | kubelet | migrator-d8c4d9469-hbqzs | Created | Created container: graceful-termination | |
openshift-kube-storage-version-migrator | kubelet | migrator-d8c4d9469-hbqzs | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f0d6e7013acdcdd6199fa08c8e2b4059f547cc6f4b424399f9767497c7692f37" already present on machine | |
openshift-kube-storage-version-migrator | kubelet | migrator-d8c4d9469-hbqzs | Started | Started container migrator | |
openshift-kube-storage-version-migrator | kubelet | migrator-d8c4d9469-hbqzs | Created | Created container: migrator | |
openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available changed from Unknown to False ("OAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found") | |
openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/operator-controller-clusterextension-editor-role because it was missing | |
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-backingresourcecontroller-backingresourcecontroller-staticresources | openshift-kube-scheduler-operator | ServiceAccountCreated | Created ServiceAccount/installer-sa -n openshift-kube-scheduler because it was missing | |
openshift-kube-storage-version-migrator | kubelet | migrator-d8c4d9469-hbqzs | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f0d6e7013acdcdd6199fa08c8e2b4059f547cc6f4b424399f9767497c7692f37" in 1.602s (1.602s including waiting). Image size: 436311051 bytes. | |
openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/catalogd-manager-role because it was missing | |
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | openshift-kube-scheduler-operator | NamespaceUpdated | Updated Namespace/openshift-kube-scheduler because it changed | |
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Progressing message changed from "All is well" to "NodeInstallerProgressing: 2 nodes are at revision 0",Available message changed from "StaticPodsAvailable: 0 nodes are active; " to "StaticPodsAvailable: 0 nodes are active; 2 nodes are at revision 0" | |
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-backingresourcecontroller-backingresourcecontroller-staticresources | openshift-kube-scheduler-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:openshift-kube-scheduler-installer because it was missing | |
openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/catalogd-metrics-reader because it was missing | |
openshift-kube-controller-manager-operator | kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | kube-controller-manager-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:kube-controller-manager-recovery because it was missing | |
openshift-kube-controller-manager-operator | kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | kube-controller-manager-operator | ServiceAccountCreated | Created ServiceAccount/pv-recycler-controller -n openshift-infra because it was missing | |
openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/aggregator-client-ca -n openshift-kube-apiserver because it was missing | |
openshift-kube-controller-manager-operator | kube-controller-manager-operator-targetconfigcontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/csr-signer-ca -n openshift-kube-controller-manager-operator because it was missing | |
openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:kube-apiserver-check-endpoints-crd-reader because it was missing | |
openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:check-endpoints because it was missing | |
openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:kube-apiserver-check-endpoints-auth-delegator because it was missing | |
openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:check-endpoints-crd-reader because it was missing | |
openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:check-endpoints-node-reader because it was missing | |
openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found" to "RevisionControllerDegraded: configmap \"audit\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: ",Progressing changed from Unknown to False ("All is well") | |
openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/bound-sa-token-signing-certs -n openshift-config-managed because it was missing | |
openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/check-endpoints-kubeconfig -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/system:openshift:controller:kube-apiserver-check-endpoints -n kube-system because it was missing | |
openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/operator-controller-clusterextension-viewer-role because it was missing | |
openshift-kube-controller-manager-operator | kube-controller-manager-operator-targetconfigcontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/csr-controller-ca -n openshift-kube-controller-manager-operator because it was missing | |
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | openshift-kube-scheduler-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/system:openshift:sa-listing-configmaps -n openshift-kube-scheduler because it was missing | |
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | openshift-kube-scheduler-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/system:openshift:sa-listing-configmaps -n openshift-kube-scheduler because it was missing | |
openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/operator-controller-extension-editor-role because it was missing | |
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-targetconfigcontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/config -n openshift-kube-scheduler because it was missing | |
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig -n openshift-kube-scheduler because it was missing | |
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | openshift-kube-scheduler-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-kube-scheduler -n kube-system because it was missing | |
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | openshift-kube-scheduler-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:kube-scheduler:public-2 because it was missing | |
openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator | kube-storage-version-migrator-operator | OperatorStatusChanged | Status for clusteroperator/kube-storage-version-migrator changed: Progressing changed from True to False ("All is well"),Available changed from False to True ("All is well") | |
openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/catalogd-proxy-role because it was missing | |
openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "All is well" to "RevisionControllerDegraded: configmap \"audit\" not found" | |
openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 0, desired generation is 1.\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 0, desired generation is 2." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 0, desired generation is 1." | |
openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObserveAPIAudiences | service account issuer changed from to https://kubernetes.default.svc | |
openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/catalogd-leader-election-rolebinding -n openshift-catalogd because it was missing | |
(x4) | openshift-apiserver | kubelet | apiserver-5c6d48559d-44pcq | FailedMount | MountVolume.SetUp failed for volume "audit" : configmap "audit-0" not found |
openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/operator-controller-extension-viewer-role because it was missing | |
openshift-apiserver | replicaset-controller | apiserver-5c6d48559d | SuccessfulDelete | Deleted pod: apiserver-5c6d48559d-44pcq | |
openshift-apiserver | default-scheduler | apiserver-6576f6bc9d-r2fhv | FailedScheduling | 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. | |
openshift-apiserver | replicaset-controller | apiserver-6576f6bc9d | SuccessfulCreate | Created pod: apiserver-6576f6bc9d-r2fhv | |
openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found" to "RevisionControllerDegraded: configmap \"audit\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found" | |
openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObservedConfigChanged | Writing updated section ("oauthAPIServer") of observed config: "\u00a0\u00a0map[string]any(\n-\u00a0\tnil,\n+\u00a0\t{\n+\u00a0\t\t\"apiServerArguments\": map[string]any{\n+\u00a0\t\t\t\"api-audiences\": []any{string(\"https://kubernetes.default.svc\")},\n+\u00a0\t\t\t\"cors-allowed-origins\": []any{string(`//127\\.0\\.0\\.1(:|$)`), string(\"//localhost(:|$)\")},\n+\u00a0\t\t\t\"etcd-servers\": []any{string(\"https://192.168.34.10:2379\")},\n+\u00a0\t\t\t\"tls-cipher-suites\": []any{\n+\u00a0\t\t\t\tstring(\"TLS_AES_128_GCM_SHA256\"), string(\"TLS_AES_256_GCM_SHA384\"),\n+\u00a0\t\t\t\tstring(\"TLS_CHACHA20_POLY1305_SHA256\"),\n+\u00a0\t\t\t\tstring(\"TLS_ECDHE_ECDSA_WITH_AES_128_GCM\"...), ...,\n+\u00a0\t\t\t},\n+\u00a0\t\t\t\"tls-min-version\": string(\"VersionTLS12\"),\n+\u00a0\t\t},\n+\u00a0\t},\n\u00a0\u00a0)\n" | |
openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObserveStorageUpdated | Updated storage urls to https://192.168.34.10:2379 | |
openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObserveTLSSecurityProfile | cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"] | |
openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObserveTLSSecurityProfile | minTLSVersion changed to VersionTLS12 | |
openshift-apiserver-operator | openshift-apiserver-operator-openshiftapiserver-workloadworkloadcontroller | openshift-apiserver-operator | DeploymentUpdated | Updated Deployment.apps/apiserver -n openshift-apiserver because it changed | |
openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" to "OAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" | |
openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: " to "RevisionControllerDegraded: configmap \"audit\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found" | |
| openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObserveFeatureFlagsUpdated | Updated apiServerArguments.feature-gates to AWSEFSDriverVolumeMetrics=true,AdminNetworkPolicy=true,AlibabaPlatform=true,AzureWorkloadIdentity=true,BareMetalLoadBalancer=true,BuildCSIVolumes=true,ChunkSizeMiB=true,CloudDualStackNodeIPs=true,DisableKubeletCloudCredentialProviders=true,GCPLabelsTags=true,HardwareSpeed=true,IngressControllerLBSubnetsAWS=true,KMSv1=true,ManagedBootImages=true,ManagedBootImagesAWS=true,MultiArchInstallAWS=true,MultiArchInstallGCP=true,NetworkDiagnosticsConfig=true,NetworkLiveMigration=true,NetworkSegmentation=true,NewOLM=true,NodeDisruptionPolicy=true,OnClusterBuild=true,PersistentIPsForVirtualization=true,PrivateHostedZoneAWS=true,SetEIPForNLBIngressController=true,VSphereControlPlaneMachineSet=true,VSphereDriverConfiguration=true,VSphereMultiVCenters=true,VSphereStaticIPs=true,ValidatingAdmissionPolicy=true,AWSClusterHostedDNS=false,AdditionalRoutingCapabilities=false,AutomatedEtcdBackup=false,BootcNodeManagement=false,CSIDriverSharedResource=false,ClusterAPIInstall=false,ClusterAPIInstallIBMCloud=false,ClusterMonitoringConfig=false,ConsolePluginContentSecurityPolicy=false,DNSNameResolver=false,DynamicResourceAllocation=false,EtcdBackendQuota=false,EventedPLEG=false,Example=false,ExternalOIDC=false,ExternalOIDCWithUIDAndExtraClaimMappings=false,GCPClusterHostedDNS=false,GatewayAPI=false,ImageStreamImportMode=false,IngressControllerDynamicConfigurationManager=false,InsightsConfig=false,InsightsConfigAPI=false,InsightsOnDemandDataGather=false,InsightsRuntimeExtractor=false,MachineAPIMigration=false,MachineAPIOperatorDisableMachineHealthCheckController=false,MachineAPIProviderOpenStack=false,MachineConfigNodes=false,MaxUnavailableStatefulSet=false,MetricsCollectionProfiles=false,MinimumKubeletVersion=false,MixedCPUsAllocation=false,MultiArchInstallAzure=false,NodeSwap=false,NutanixMultiSubnets=false,OVNObservability=false,OpenShiftPodSecurityAdmission=false,PinnedImages=false,PlatformOperators=false,ProcMountType=false,RouteAdvertisements=false,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,SignatureStores=false,SigstoreImageVerification=false,TranslateStreamCloseWebsocketRequests=false,UpgradeStatus=false,UserNamespacesPodSecurityStandards=false,UserNamespacesSupport=false,VSphereMultiNetworks=false,VolumeAttributesClass=false,VolumeGroupSnapshot=false |
| openshift-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled down replica set apiserver-5c6d48559d to 1 from 2 |
| openshift-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled up replica set apiserver-6576f6bc9d to 1 from 0 |
| openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:kube-apiserver-recovery because it was missing |
| openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/system:openshift:controller:check-endpoints -n openshift-kube-apiserver because it was missing |
| openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/etcd-serving-ca -n openshift-kube-apiserver because it was missing |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | kube-controller-manager-operator | ServiceAccountCreated | Created ServiceAccount/localhost-recovery-client -n openshift-kube-controller-manager because it was missing |
| openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/control-plane-node-kubeconfig -n openshift-kube-apiserver because it was missing |
| openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/authentication-reader-for-authenticated-users -n kube-system because it was missing |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/csr-controller-ca -n openshift-config-managed because it was missing |
| openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 0, desired generation is 1." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 1, desired generation is 2." |
| openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | SecretCreated | Created Secret/etcd-client -n openshift-kube-apiserver because it was missing |
| openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-targetconfigcontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/serviceaccount-ca -n openshift-kube-scheduler because it was missing |
| openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | openshift-kube-scheduler-operator | ServiceCreated | Created Service/scheduler -n openshift-kube-scheduler because it was missing |
| openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObservedConfigChanged | Writing updated section ("oauthServer") of observed config: "\u00a0\u00a0map[string]any(\n-\u00a0\tnil,\n+\u00a0\t{\n+\u00a0\t\t\"corsAllowedOrigins\": []any{string(`//127\\.0\\.0\\.1(:|$)`), string(\"//localhost(:|$)\")},\n+\u00a0\t\t\"oauthConfig\": map[string]any{\n+\u00a0\t\t\t\"assetPublicURL\": string(\"\"),\n+\u00a0\t\t\t\"loginURL\": string(\"https://api.ocp.openstack.lab:6443\"),\n+\u00a0\t\t\t\"templates\": map[string]any{\n+\u00a0\t\t\t\t\"error\": string(\"/var/config/system/secrets/v4-0-\"...),\n+\u00a0\t\t\t\t\"login\": string(\"/var/config/system/secrets/v4-0-\"...),\n+\u00a0\t\t\t\t\"providerSelection\": string(\"/var/config/system/secrets/v4-0-\"...),\n+\u00a0\t\t\t},\n+\u00a0\t\t\t\"tokenConfig\": map[string]any{\n+\u00a0\t\t\t\t\"accessTokenMaxAgeSeconds\": float64(86400),\n+\u00a0\t\t\t\t\"authorizeTokenMaxAgeSeconds\": float64(300),\n+\u00a0\t\t\t},\n+\u00a0\t\t},\n+\u00a0\t\t\"serverArguments\": map[string]any{\n+\u00a0\t\t\t\"audit-log-format\": []any{string(\"json\")},\n+\u00a0\t\t\t\"audit-log-maxbackup\": []any{string(\"10\")},\n+\u00a0\t\t\t\"audit-log-maxsize\": []any{string(\"100\")},\n+\u00a0\t\t\t\"audit-log-path\": []any{string(\"/var/log/oauth-server/audit.log\")},\n+\u00a0\t\t\t\"audit-policy-file\": []any{string(\"/var/run/configmaps/audit/audit.\"...)},\n+\u00a0\t\t},\n+\u00a0\t\t\"servingInfo\": map[string]any{\n+\u00a0\t\t\t\"cipherSuites\": []any{\n+\u00a0\t\t\t\tstring(\"TLS_AES_128_GCM_SHA256\"), string(\"TLS_AES_256_GCM_SHA384\"),\n+\u00a0\t\t\t\tstring(\"TLS_CHACHA20_POLY1305_SHA256\"),\n+\u00a0\t\t\t\tstring(\"TLS_ECDHE_ECDSA_WITH_AES_128_GCM\"...), ...,\n+\u00a0\t\t\t},\n+\u00a0\t\t\t\"minTLSVersion\": string(\"VersionTLS12\"),\n+\u00a0\t\t},\n+\u00a0\t\t\"volumesToMount\": map[string]any{\"identityProviders\": string(\"{}\")},\n+\u00a0\t},\n\u00a0\u00a0)\n" |
| openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: endpoints \"api\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" to "APIServicesAvailable: endpoints \"api\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 2 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." |
| openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "OAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" to "APIServicesAvailable: endpoints \"api\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" |
| openshift-authentication-operator | oauth-apiserver-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | authentication-operator | FastControllerResync | Controller "openshift-apiserver-APIService" resync interval is set to 10s which might lead to client request throttling |
| openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found" to "RevisionControllerDegraded: configmap \"audit\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found" |
| openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObserveTemplates | templates changed to map["error":"/var/config/system/secrets/v4-0-config-system-ocp-branding-template/errors.html" "login":"/var/config/system/secrets/v4-0-config-system-ocp-branding-template/login.html" "providerSelection":"/var/config/system/secrets/v4-0-config-system-ocp-branding-template/providers.html"] |
| openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/operator-controller-manager-role because it was missing |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | kube-controller-manager-operator | SecretCreated | Created Secret/localhost-recovery-client-token -n openshift-kube-controller-manager because it was missing |
| openshift-apiserver | kubelet | apiserver-6576f6bc9d-r2fhv | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:db6d4edac103c373eb6bee221074d39e3707377b4d26444e98afb1a1363b3cb7" |
| openshift-apiserver | multus | apiserver-6576f6bc9d-r2fhv | AddedInterface | Add eth0 [10.129.0.12/23] from ovn-kubernetes |
| openshift-apiserver | default-scheduler | apiserver-6576f6bc9d-r2fhv | Scheduled | Successfully assigned openshift-apiserver/apiserver-6576f6bc9d-r2fhv to master-2 |
| openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObserveTLSSecurityProfile | minTLSVersion changed to VersionTLS12 |
| openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObserveTLSSecurityProfile | cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"] |
| openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/catalogd-manager-rolebinding -n openshift-config because it was missing |
| openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObserveTokenConfig | accessTokenMaxAgeSeconds changed from 0 to 86400 |
| openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObserveAPIServerURL | loginURL changed from "" to https://api.ocp.openstack.lab:6443 |
| openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kubelet-serving-ca -n openshift-kube-apiserver because it was missing |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | kube-controller-manager-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:cluster-csr-approver-controller because it was missing |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | kube-controller-manager-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:cluster-csr-approver-controller because it was missing |
| openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObserveAuditProfile | AuditProfile changed from '<nil>' to 'map[audit-log-format:[json] audit-log-maxbackup:[10] audit-log-maxsize:[100] audit-log-path:[/var/log/oauth-server/audit.log] audit-policy-file:[/var/run/configmaps/audit/audit.yaml]]' |
| openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/catalogd-manager-rolebinding because it was missing |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-targetconfigcontroller | kube-controller-manager-operator | SecretCreated | Created Secret/csr-signer -n openshift-kube-controller-manager because it was missing |
(x6) | openshift-route-controller-manager | kubelet | route-controller-manager-7f89f9db8c-j4hd5 | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found |
(x6) | openshift-route-controller-manager | kubelet | route-controller-manager-7f89f9db8c-dx7pm | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found |
| openshift-network-diagnostics | multus | network-check-target-sndvg | AddedInterface | Add eth0 [10.128.0.4/23] from ovn-kubernetes |
| openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/catalogd-proxy-rolebinding because it was missing |
| openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | ServiceAccountCreated | Created ServiceAccount/localhost-recovery-client -n openshift-kube-apiserver because it was missing |
| openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kubelet-serving-ca -n openshift-config-managed because it was missing |
| openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObserveStorageUpdated | Updated storage urls to https://192.168.34.10:2379,https://localhost:2379 |
| openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/operator-controller-metrics-reader because it was missing |
| openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObservedConfigChanged | Writing updated observed config:   map[string]any{ + "admission": map[string]any{ + "pluginConfig": map[string]any{ + "PodSecurity": map[string]any{"configuration": map[string]any{...}}, + "network.openshift.io/ExternalIPRanger": map[string]any{"configuration": map[string]any{...}}, + "network.openshift.io/RestrictedEndpointsAdmission": map[string]any{"configuration": map[string]any{...}}, + }, + }, + "apiServerArguments": map[string]any{ + "api-audiences": []any{string("https://kubernetes.default.svc")}, + "etcd-servers": []any{string("https://192.168.34.10:2379"), string("https://localhost:2379")}, + "feature-gates": []any{ + string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), + string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), + string("BareMetalLoadBalancer=true"), string("BuildCSIVolumes=true"), + string("ChunkSizeMiB=true"), string("CloudDualStackNodeIPs=true"), ..., + }, + "goaway-chance": []any{string("0.001")}, + "runtime-config": []any{string("admissionregistration.k8s.io/v1beta1=true")}, + "send-retry-after-while-not-ready-once": []any{string("false")}, + "service-account-issuer": []any{string("https://kubernetes.default.svc")}, + "service-account-jwks-uri": []any{string("https://api.ocp.openstack.lab:6443/openid/v1/jwks")}, + }, + "corsAllowedOrigins": []any{string(`//127\.0\.0\.1(:|$)`), string("//localhost(:|$)")}, + "servicesSubnet": string("172.30.0.0/16"), + "servingInfo": map[string]any{ + "bindAddress": string("0.0.0.0:6443"), + "bindNetwork": string("tcp4"), + "cipherSuites": []any{ + string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), + string("TLS_CHACHA20_POLY1305_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"), ..., + }, + "minTLSVersion": string("VersionTLS12"), + "namedCertificates": []any{ + map[string]any{ + "certFile": string("/etc/kubernetes/static-pod-certs"...), + "keyFile": string("/etc/kubernetes/static-pod-certs"...), + }, + map[string]any{ + "certFile": string("/etc/kubernetes/static-pod-certs"...), + "keyFile": string("/etc/kubernetes/static-pod-certs"...), + }, + map[string]any{ + "certFile": string("/etc/kubernetes/static-pod-certs"...), + "keyFile": string("/etc/kubernetes/static-pod-certs"...), + }, + map[string]any{ + "certFile": string("/etc/kubernetes/static-pod-certs"...), + "keyFile": string("/etc/kubernetes/static-pod-certs"...), + }, + map[string]any{ + "certFile": string("/etc/kubernetes/static-pod-resou"...), + "keyFile": string("/etc/kubernetes/static-pod-resou"...), + }, + }, + }, } |
| openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found" to "RevisionControllerDegraded: configmap \"audit\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server" |
| openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | ServiceCreated | Created Service/catalogd-service -n openshift-catalogd because it was missing |
| openshift-authentication-operator | oauth-apiserver-openshiftauthenticatorcertrequester | authentication-operator | CSRCreated | A csr "system:openshift:openshift-authenticator-4h722" is created for OpenShiftAuthenticatorCertRequester |
| openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | StorageVersionMigrationCreated | Created StorageVersionMigration.migration.k8s.io/flowcontrol-prioritylevel-storage-version-migration because it was missing |
| openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-targetconfigcontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/scheduler-kubeconfig -n openshift-kube-scheduler because it was missing |
| openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-client-token -n openshift-kube-apiserver because it was missing |
| openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | CustomResourceDefinitionUpdated | Updated CustomResourceDefinition.apiextensions.k8s.io/apirequestcounts.apiserver.openshift.io because it changed |
| openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | StorageVersionMigrationCreated | Created StorageVersionMigration.migration.k8s.io/flowcontrol-flowschema-storage-version-migration-v1beta3 because it was missing |
| openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | StorageVersionMigrationCreated | Created StorageVersionMigration.migration.k8s.io/flowcontrol-flowschema-storage-version-migration because it was missing |
| openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | StorageVersionMigrationCreated | Created StorageVersionMigration.migration.k8s.io/flowcontrol-prioritylevel-storage-version-migration-v1beta3 because it was missing |
| openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | openshift-kube-scheduler-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:kube-scheduler-recovery because it was missing |
| openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | MutatingWebhookConfigurationCreated | Created MutatingWebhookConfiguration.admissionregistration.k8s.io/catalogd-mutating-webhook-configuration because it was missing |
| openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/operator-controller-proxy-role because it was missing |
| openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | openshift-kube-scheduler-operator | ServiceAccountCreated | Created ServiceAccount/openshift-kube-scheduler-sa -n openshift-kube-scheduler because it was missing |
(x2) | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | TargetUpdateRequired | "kube-scheduler-client-cert-key" in "openshift-config-managed" requires a new target cert/key pair: secret doesn't exist |
| openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/operator-controller-manager-rolebinding -n openshift-config because it was missing |
| openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | PrometheusRuleCreated | Created PrometheusRule.monitoring.coreos.com/kube-apiserver-slos-basic -n openshift-kube-apiserver because it was missing |
| openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | PrometheusRuleCreated | Created PrometheusRule.monitoring.coreos.com/kube-apiserver-requests -n openshift-kube-apiserver because it was missing |
| openshift-authentication-operator | cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources | authentication-operator | ConfigMapCreateFailed | Failed to create ConfigMap/audit -n openshift-authentication: namespaces "openshift-authentication" not found |
| openshift-authentication-operator | oauth-apiserver-webhook-authenticator-cert-approver-OpenShiftAuthenticator-webhookauthenticatorcertapprover_openshiftauthenticator | authentication-operator | CSRApproval | The CSR "system:openshift:openshift-authenticator-4h722" has been approved |
| openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretCreated | Created Secret/kube-scheduler-client-cert-key -n openshift-config-managed because it was missing |
| openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | PrometheusRuleCreated | Created PrometheusRule.monitoring.coreos.com/audit-errors -n openshift-kube-apiserver because it was missing |
| openshift-authentication-operator | oauth-apiserver-openshiftauthenticatorcertrequester | authentication-operator | NoValidCertificateFound | No valid client certificate for OpenShiftAuthenticatorCertRequester is found: unable to parse certificate: data does not contain any valid RSA or ECDSA certificates |
(x6) | openshift-controller-manager | kubelet | controller-manager-6687f866cc-2f4dq | FailedMount | MountVolume.SetUp failed for volume "client-ca" : configmap "client-ca" not found |
| openshift-network-diagnostics | multus | network-check-target-cb5bh | AddedInterface | Add eth0 [10.129.0.4/23] from ovn-kubernetes |
| openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | ConfigMapCreated | Created ConfigMap/catalogd-trusted-ca-bundle -n openshift-catalogd because it was missing |
| openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | PrometheusRuleCreated | Created PrometheusRule.monitoring.coreos.com/api-usage -n openshift-kube-apiserver because it was missing |
| openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 1, desired generation is 2." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/2 pods have been updated to the latest generation and 0/2 pods are available" |
| openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/operator-controller-leader-election-rolebinding -n openshift-operator-controller because it was missing |
| openshift-authentication-operator | cluster-authentication-operator-resource-sync-controller-resourcesynccontroller | authentication-operator | ConfigMapCreateFailed | Failed to create ConfigMap/etcd-serving-ca -n openshift-oauth-apiserver: namespaces "openshift-oauth-apiserver" not found |
| openshift-authentication-operator | oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources | authentication-operator | NamespaceCreated | Created Namespace/openshift-oauth-apiserver because it was missing |
| openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-resource-sync-controller-resourcesynccontroller | openshift-kube-scheduler-operator | SecretCreated | Created Secret/kube-scheduler-client-cert-key -n openshift-kube-scheduler because it was missing |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-targetconfigcontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/serviceaccount-ca -n openshift-kube-controller-manager because it was missing |
| openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretCreateFailed | Failed to create Secret/: secrets "kube-scheduler-client-cert-key" already exists |
| openshift-authentication-operator | oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources | authentication-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:oauth-apiserver because it was missing |
| openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | PrometheusRuleCreated | Created PrometheusRule.monitoring.coreos.com/podsecurity -n openshift-kube-apiserver because it was missing |
openshift-etcd-operator |
openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller |
etcd-operator |
SecretUpdated |
Updated Secret/etcd-all-certs -n openshift-etcd because it changed | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-oauth-apiserver namespace | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources |
openshift-kube-scheduler-operator |
ServiceAccountCreated |
Created ServiceAccount/localhost-recovery-client -n openshift-kube-scheduler because it was missing | |
openshift-machine-api |
multus |
machine-api-operator-9dbb96f7-s66vj |
AddedInterface |
Add eth0 [10.128.0.8/23] from ovn-kubernetes | |
openshift-machine-api |
kubelet |
machine-api-operator-9dbb96f7-s66vj |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources |
openshift-kube-scheduler-operator |
SecretCreated |
Created Secret/localhost-recovery-client-token -n openshift-kube-scheduler because it was missing | |
(x7) | openshift-machine-config-operator |
kubelet |
machine-config-operator-7b75469658-j2dbc |
FailedMount |
MountVolume.SetUp failed for volume "proxy-tls" : secret "mco-proxy-tls" not found |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nCertRotation_CheckEndpointsClient_Degraded: configmaps \"kube-control-plane-signer-ca\" already exists\nCertRotation_ControlPlaneNodeAdminClient_Degraded: configmaps \"kube-control-plane-signer-ca\" already exists\nCertRotation_InternalLoadBalancerServing_Degraded: configmaps \"loadbalancer-serving-ca\" already exists" to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nCertRotation_CheckEndpointsClient_Degraded: configmaps \"kube-control-plane-signer-ca\" already exists\nCertRotation_ControlPlaneNodeAdminClient_Degraded: configmaps \"kube-control-plane-signer-ca\" already exists" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources |
kube-apiserver-operator |
PrometheusRuleCreated |
Created PrometheusRule.monitoring.coreos.com/kube-apiserver-slos-extended -n openshift-kube-apiserver because it was missing | |
openshift-image-registry |
multus |
cluster-image-registry-operator-6b8674d7ff-gspqw |
AddedInterface |
Add eth0 [10.128.0.19/23] from ovn-kubernetes | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-targetconfigcontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/kube-scheduler-pod -n openshift-kube-scheduler because it was missing | |
(x7) | openshift-multus |
kubelet |
multus-admission-controller-77b66fddc8-mgc7h |
FailedMount |
MountVolume.SetUp failed for volume "webhook-certs" : secret "multus-admission-controller-secret" not found |
openshift-machine-api |
kubelet |
cluster-baremetal-operator-6c8fbf4498-kcckh |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0ca84dadf413f08150ff8224f856cca12667b15168499013d0ff409dd323505d" | |
openshift-machine-api |
multus |
cluster-baremetal-operator-6c8fbf4498-kcckh |
AddedInterface |
Add eth0 [10.128.0.6/23] from ovn-kubernetes | |
(x7) | openshift-multus |
kubelet |
multus-admission-controller-77b66fddc8-9npgz |
FailedMount |
MountVolume.SetUp failed for volume "webhook-certs" : secret "multus-admission-controller-secret" not found |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
StartingNewRevision |
new revision 1 triggered by "configmap \"kube-scheduler-pod-0\" not found" | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Degraded message changed from "RevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nNodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]" to "RevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nNodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": serviceaccounts \"localhost-recovery-client\" not found" | |
openshift-cluster-olm-operator |
OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources |
cluster-olm-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/operator-controller-manager-rolebinding -n openshift-operator-controller because it was missing | |
openshift-machine-api |
multus |
control-plane-machine-set-operator-84f9cbd5d9-n87md |
AddedInterface |
Add eth0 [10.128.0.14/23] from ovn-kubernetes | |
openshift-machine-api |
kubelet |
control-plane-machine-set-operator-84f9cbd5d9-n87md |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:90c5ef075961ab090e3854d470bb6659737ee76ac96637e6d0dd62080e38e26e" | |
(x7) | openshift-marketplace |
kubelet |
marketplace-operator-c4f798dd4-djh96 |
FailedMount |
MountVolume.SetUp failed for volume "marketplace-operator-metrics" : secret "marketplace-operator-metrics" not found |
openshift-cloud-credential-operator |
multus |
cloud-credential-operator-5cf49b6487-4cf2d |
AddedInterface |
Add eth0 [10.128.0.11/23] from ovn-kubernetes | |
(x7) | openshift-monitoring |
kubelet |
cluster-monitoring-operator-5b5dd85dcc-cxtgh |
FailedMount |
MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" : secret "cluster-monitoring-operator-tls" not found |
openshift-ingress-operator |
multus |
ingress-operator-766ddf4575-xhdjt |
AddedInterface |
Add eth0 [10.128.0.13/23] from ovn-kubernetes | |
(x7) | openshift-operator-lifecycle-manager |
kubelet |
package-server-manager-798cc87f55-j2bjv |
FailedMount |
MountVolume.SetUp failed for volume "package-server-manager-serving-cert" : secret "package-server-manager-serving-cert" not found |
(x3) | openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
DeploymentUpdated |
Updated Deployment.apps/controller-manager -n openshift-controller-manager because it changed |
(x7) | openshift-operator-lifecycle-manager |
kubelet |
olm-operator-867f8475d9-fl56c |
FailedMount |
MountVolume.SetUp failed for volume "srv-cert" : secret "olm-operator-serving-cert" not found |
(x7) | openshift-operator-lifecycle-manager |
kubelet |
catalog-operator-f966fb6f8-dwwm2 |
FailedMount |
MountVolume.SetUp failed for volume "srv-cert" : secret "catalog-operator-serving-cert" not found |
openshift-image-registry |
kubelet |
cluster-image-registry-operator-6b8674d7ff-gspqw |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c78b39674bd52b55017e08466030e88727f76514fbfa4e1918541697374881b3" | |
openshift-authentication-operator |
oauth-apiserver-audit-policy-controller-auditpolicycontroller |
authentication-operator |
ConfigMapCreated |
Created ConfigMap/audit -n openshift-oauth-apiserver because it was missing | |
openshift-apiserver |
kubelet |
apiserver-6576f6bc9d-r2fhv |
Started |
Started container fix-audit-permissions | |
openshift-controller-manager |
deployment-controller |
controller-manager |
ScalingReplicaSet |
Scaled down replica set controller-manager-6687f866cc to 0 from 1 | |
openshift-controller-manager |
deployment-controller |
controller-manager |
ScalingReplicaSet |
Scaled up replica set controller-manager-55bcd8787f to 1 from 0 | |
openshift-authentication-operator |
oauth-apiserver-revisioncontroller |
authentication-operator |
StartingNewRevision |
new revision 1 triggered by "configmap \"audit-0\" not found" | |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
ConfigMapUpdated |
Updated ConfigMap/config -n openshift-controller-manager: cause by changes in data.openshift-controller-manager.serving-cert.secret | |
openshift-cluster-machine-approver |
kubelet |
machine-approver-7876f99457-kpq7g |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine | |
openshift-machine-api |
kubelet |
cluster-autoscaler-operator-7ff449c7c5-nmpfk |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6f547c00317910e3dd789bb16cc2a04e545f737570d484481408a4d3303d5732" | |
openshift-machine-api |
kubelet |
cluster-autoscaler-operator-7ff449c7c5-nmpfk |
Started |
Started container kube-rbac-proxy | |
openshift-machine-api |
kubelet |
cluster-autoscaler-operator-7ff449c7c5-nmpfk |
Created |
Created container: kube-rbac-proxy | |
openshift-machine-api |
kubelet |
cluster-autoscaler-operator-7ff449c7c5-nmpfk |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine | |
openshift-machine-api |
multus |
cluster-autoscaler-operator-7ff449c7c5-nmpfk |
AddedInterface |
Add eth0 [10.128.0.5/23] from ovn-kubernetes | |
openshift-apiserver |
kubelet |
apiserver-6576f6bc9d-r2fhv |
Created |
Created container: fix-audit-permissions | |
openshift-cluster-machine-approver |
kubelet |
machine-approver-7876f99457-kpq7g |
Created |
Created container: kube-rbac-proxy | |
openshift-cluster-machine-approver |
kubelet |
machine-approver-7876f99457-kpq7g |
Started |
Started container kube-rbac-proxy | |
openshift-controller-manager |
replicaset-controller |
controller-manager-6687f866cc |
SuccessfulDelete |
Deleted pod: controller-manager-6687f866cc-2f4dq | |
openshift-cluster-machine-approver |
kubelet |
machine-approver-7876f99457-kpq7g |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c2bffa697d52826e0ba76ddc30a78f44b274be22ee87af8d1a9d1c8337162be9" | |
openshift-cluster-node-tuning-operator |
multus |
cluster-node-tuning-operator-7866c9bdf4-d4dlj |
AddedInterface |
Add eth0 [10.128.0.18/23] from ovn-kubernetes | |
openshift-cluster-node-tuning-operator |
kubelet |
cluster-node-tuning-operator-7866c9bdf4-d4dlj |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ca9272c8bbbde3ffdea2887c91dfb5ec4b09de7a8e2ae03aa5a47f56ff41e326" | |
openshift-apiserver |
kubelet |
apiserver-6576f6bc9d-r2fhv |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:db6d4edac103c373eb6bee221074d39e3707377b4d26444e98afb1a1363b3cb7" in 3.837s (3.837s including waiting). Image size: 582409947 bytes. | |
openshift-controller-manager |
default-scheduler |
controller-manager-55bcd8787f-4krnt |
FailedScheduling |
0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. | |
openshift-controller-manager |
replicaset-controller |
controller-manager-55bcd8787f |
SuccessfulCreate |
Created pod: controller-manager-55bcd8787f-4krnt | |
openshift-dns-operator |
kubelet |
dns-operator-7769d9677-nh2qc |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5ad9f2d4b8cf9205c5aa91b1eb9abafc2a638c7bd4b3f971f3d6b9a4df7318f" | |
openshift-dns-operator |
multus |
dns-operator-7769d9677-nh2qc |
AddedInterface |
Add eth0 [10.128.0.26/23] from ovn-kubernetes | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "RevisionControllerDegraded: configmap \"audit\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nResourceSyncControllerDegraded: namespaces \"openshift-oauth-apiserver\" not found" | |
openshift-cluster-olm-operator |
CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources |
cluster-olm-operator |
MutatingWebhookConfigurationUpdated |
Updated MutatingWebhookConfiguration.admissionregistration.k8s.io/catalogd-mutating-webhook-configuration because it changed | |
openshift-cloud-credential-operator |
kubelet |
cloud-credential-operator-5cf49b6487-4cf2d |
Started |
Started container kube-rbac-proxy | |
openshift-cloud-credential-operator |
kubelet |
cloud-credential-operator-5cf49b6487-4cf2d |
Created |
Created container: kube-rbac-proxy | |
openshift-cloud-credential-operator |
kubelet |
cloud-credential-operator-5cf49b6487-4cf2d |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine | |
openshift-machine-api |
kubelet |
machine-api-operator-9dbb96f7-s66vj |
Created |
Created container: kube-rbac-proxy | |
openshift-machine-api |
kubelet |
machine-api-operator-9dbb96f7-s66vj |
Started |
Started container kube-rbac-proxy | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server" to "RevisionControllerDegraded: configmap \"audit\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" | |
openshift-authentication-operator |
cluster-authentication-operator-resource-sync-controller-resourcesynccontroller |
authentication-operator |
SecretCreated |
Created Secret/etcd-client -n openshift-oauth-apiserver because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager |
openshift-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/controller-manager: updated replicas is 1, desired replicas is 2\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" to "Progressing: deployment/controller-manager: observed generation is 3, desired generation is 4.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/controller-manager: updated replicas is 1, desired replicas is 2\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" | |
openshift-cloud-credential-operator |
kubelet |
cloud-credential-operator-5cf49b6487-4cf2d |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6458d944052d69ffeffc62813d3a5cc3344ce7091b6df0ebf54d73c861355b01" | |
openshift-cluster-olm-operator |
OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources |
cluster-olm-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/operator-controller-proxy-rolebinding because it was missing | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-revisioncontroller |
etcd-operator |
StartingNewRevision |
new revision 1 triggered by "configmap \"etcd-pod-0\" not found" | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-target-config-controller-targetconfigcontroller |
etcd-operator |
ConfigMapCreated |
Created ConfigMap/etcd-pod -n openshift-etcd because it was missing | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-script-controller-scriptcontroller |
etcd-operator |
ConfigMapCreated |
Created ConfigMap/etcd-scripts -n openshift-etcd because it was missing | |
openshift-ingress-operator |
kubelet |
ingress-operator-766ddf4575-xhdjt |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:805f1bf09553ecf2e9d735c881539c011947eee7bf4c977b074e2d0396b9d99a" | |
openshift-cluster-olm-operator |
OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources |
cluster-olm-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/operator-controller-manager-rolebinding because it was missing | |
openshift-machine-api |
kubelet |
machine-api-operator-9dbb96f7-s66vj |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e7015eb7a0d62afeba6f2f0dbd57a8ef24b8477b00f66a6789ccf97b78271e9a" | |
openshift-apiserver |
kubelet |
apiserver-6576f6bc9d-r2fhv |
Started |
Started container openshift-apiserver | |
openshift-cluster-olm-operator |
OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources |
cluster-olm-operator |
ConfigMapCreated |
Created ConfigMap/operator-controller-trusted-ca-bundle -n openshift-operator-controller because it was missing | |
openshift-cluster-olm-operator |
olm-status-controller-statussyncer_olm |
cluster-olm-operator |
OperatorStatusChanged |
Status for clusteroperator/olm changed: Progressing message changed from "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to act on changes" to "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to act on changes\nOperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to act on changes",Available message changed from "CatalogdDeploymentCatalogdControllerManagerAvailable: Waiting for Deployment" to "CatalogdDeploymentCatalogdControllerManagerAvailable: Waiting for Deployment\nOperatorcontrollerDeploymentOperatorControllerControllerManagerAvailable: Waiting for Deployment" | |
openshift-cluster-olm-operator |
olm-status-controller-statussyncer_olm |
cluster-olm-operator |
OperatorStatusChanged |
Status for clusteroperator/olm changed: Degraded message changed from "CatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCertifiedOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" to "CatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCertifiedOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" | |
openshift-operator-controller |
deployment-controller |
operator-controller-controller-manager |
ScalingReplicaSet |
Scaled up replica set operator-controller-controller-manager-668cb7cdc8 to 1 | |
openshift-cluster-olm-operator |
OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources |
cluster-olm-operator |
ServiceCreated |
Created Service/operator-controller-controller-manager-metrics-service -n openshift-operator-controller because it was missing | |
openshift-cluster-olm-operator |
CatalogdDeploymentCatalogdControllerManager-catalogddeploymentcatalogdcontrollermanager-deployment-controller--catalogddeploymentcatalogdcontrollermanager |
cluster-olm-operator |
DeploymentCreated |
Created Deployment.apps/catalogd-controller-manager -n openshift-catalogd because it was missing | |
openshift-cluster-olm-operator |
olm-status-controller-statussyncer_olm |
cluster-olm-operator |
OperatorStatusChanged |
Status for clusteroperator/olm changed: Degraded message changed from "CatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCertifiedOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" to "CatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCertifiedOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" | |
openshift-cluster-olm-operator |
OperatorcontrollerDeploymentOperatorControllerControllerManager-operatorcontrollerdeploymentoperatorcontrollercontrollermanager-deployment-controller--operatorcontrollerdeploymentoperatorcontrollercontrollermanager |
cluster-olm-operator |
DeploymentCreated |
Created Deployment.apps/operator-controller-controller-manager -n openshift-operator-controller because it was missing | |
openshift-apiserver |
kubelet |
apiserver-6576f6bc9d-r2fhv |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" | |
openshift-operator-controller |
replicaset-controller |
operator-controller-controller-manager-668cb7cdc8 |
SuccessfulCreate |
Created pod: operator-controller-controller-manager-668cb7cdc8-lwlfz | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-targetconfigcontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/controller-manager-kubeconfig -n openshift-kube-controller-manager because it was missing | |
openshift-cluster-olm-operator |
olm-status-controller-statussyncer_olm |
cluster-olm-operator |
OperatorStatusChanged |
Status for clusteroperator/olm changed: Degraded message changed from "CatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" to "CatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCertifiedOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" | |
openshift-operator-controller |
default-scheduler |
operator-controller-controller-manager-668cb7cdc8-lwlfz |
Scheduled |
Successfully assigned openshift-operator-controller/operator-controller-controller-manager-668cb7cdc8-lwlfz to master-1 | |
openshift-controller-manager |
default-scheduler |
controller-manager-55bcd8787f-4krnt |
Scheduled |
Successfully assigned openshift-controller-manager/controller-manager-55bcd8787f-4krnt to master-2 | |
openshift-cluster-olm-operator |
olm-status-controller-statussyncer_olm |
cluster-olm-operator |
OperatorStatusChanged |
Status for clusteroperator/olm changed: Degraded message changed from "All is well" to "CatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" | |
openshift-apiserver |
kubelet |
apiserver-6576f6bc9d-r2fhv |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:db6d4edac103c373eb6bee221074d39e3707377b4d26444e98afb1a1363b3cb7" already present on machine | |
openshift-apiserver |
kubelet |
apiserver-6576f6bc9d-r2fhv |
Created |
Created container: openshift-apiserver | |
openshift-cluster-olm-operator |
olm-status-controller-statussyncer_olm |
cluster-olm-operator |
OperatorStatusChanged |
Status for clusteroperator/olm changed: Progressing changed from Unknown to True ("CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to act on changes"),Available changed from Unknown to False ("CatalogdDeploymentCatalogdControllerManagerAvailable: Waiting for Deployment") | |
openshift-catalogd |
deployment-controller |
catalogd-controller-manager |
ScalingReplicaSet |
Scaled up replica set catalogd-controller-manager-596f9d8bbf to 1 | |
openshift-catalogd |
replicaset-controller |
catalogd-controller-manager-596f9d8bbf |
SuccessfulCreate |
Created pod: catalogd-controller-manager-596f9d8bbf-wn7c6 | |
openshift-catalogd |
default-scheduler |
catalogd-controller-manager-596f9d8bbf-wn7c6 |
Scheduled |
Successfully assigned openshift-catalogd/catalogd-controller-manager-596f9d8bbf-wn7c6 to master-1 | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/kube-scheduler-pod-1 -n openshift-kube-scheduler because it was missing | |
openshift-cluster-olm-operator |
olm-status-controller-statussyncer_olm |
cluster-olm-operator |
OperatorStatusChanged |
Status for clusteroperator/olm changed: Progressing message changed from "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to act on changes\nOperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to act on changes" to "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to deploy pods\nOperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to act on changes" | |
openshift-authentication-operator |
oauth-apiserver-openshiftauthenticatorcertrequester |
authentication-operator |
ClientCertificateCreated |
A new client certificate for OpenShiftAuthenticatorCertRequester is available | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/config-1 -n openshift-kube-scheduler because it was missing | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nResourceSyncControllerDegraded: namespaces \"openshift-oauth-apiserver\" not found" to "RevisionControllerDegraded: configmap \"audit\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" | |
openshift-authentication-operator |
cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources |
authentication-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-authentication because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/serviceaccount-ca-1 -n openshift-kube-scheduler because it was missing | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-revisioncontroller |
etcd-operator |
ConfigMapCreated |
Created ConfigMap/etcd-pod-1 -n openshift-etcd because it was missing | |
openshift-authentication-operator |
cluster-authentication-operator-resource-sync-controller-resourcesynccontroller |
authentication-operator |
ConfigMapCreated |
Created ConfigMap/etcd-serving-ca -n openshift-oauth-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nCertRotation_CheckEndpointsClient_Degraded: configmaps \"kube-control-plane-signer-ca\" already exists\nCertRotation_ControlPlaneNodeAdminClient_Degraded: configmaps \"kube-control-plane-signer-ca\" already exists" to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nCertRotation_ControlPlaneNodeAdminClient_Degraded: configmaps \"kube-control-plane-signer-ca\" already exists" | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-target-config-controller-targetconfigcontroller |
etcd-operator |
ConfigMapCreated |
Created ConfigMap/restore-etcd-pod -n openshift-etcd because it was missing | |
openshift-cluster-olm-operator |
olm-status-controller-statussyncer_olm |
cluster-olm-operator |
OperatorStatusChanged |
Status for clusteroperator/olm changed: Progressing message changed from "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to deploy pods\nOperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to act on changes" to "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to deploy pods\nOperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to deploy pods" | |
openshift-apiserver |
kubelet |
apiserver-6576f6bc9d-r2fhv |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" in 2.304s (2.304s including waiting). Image size: 508004341 bytes. | |
openshift-authentication-operator |
cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources |
authentication-operator |
NamespaceCreated |
Created Namespace/openshift-authentication because it was missing | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-authentication namespace | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
StartingNewRevision |
new revision 1 triggered by "configmap \"kube-controller-manager-pod-0\" not found" | |
(x86) | openshift-kube-apiserver-operator |
kube-apiserver-operator-target-config-controller-targetconfigcontroller |
kube-apiserver-operator |
ConfigMissing |
no observedConfig |
openshift-etcd-operator |
openshift-cluster-etcd-operator-revisioncontroller |
etcd-operator |
ConfigMapCreated |
Created ConfigMap/etcd-endpoints-1 -n openshift-etcd because it was missing | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift namespace | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/scheduler-kubeconfig-1 -n openshift-kube-scheduler because it was missing | |
openshift-apiserver |
kubelet |
apiserver-6576f6bc9d-r2fhv |
Created |
Created container: openshift-apiserver-check-endpoints | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-node namespace | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Progressing message changed from "All is well" to "AuthenticatorCertKeyProgressing: All is well" | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: endpoints \"api\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 2 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 2 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." | |
openshift-authentication-operator |
oauth-apiserver-revisioncontroller |
authentication-operator |
ConfigMapCreated |
Created ConfigMap/audit-1 -n openshift-oauth-apiserver because it was missing | |
openshift-apiserver |
kubelet |
apiserver-6576f6bc9d-r2fhv |
Started |
Started container openshift-apiserver-check-endpoints | |
openshift-authentication-operator |
oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources |
authentication-operator |
ServiceCreated |
Created Service/api -n openshift-oauth-apiserver because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-targetconfigcontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/kube-controller-manager-pod -n openshift-kube-controller-manager because it was missing | |
openshift-authentication-operator |
cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources |
authentication-operator |
ConfigMapCreated |
Created ConfigMap/v4-0-config-system-trusted-ca-bundle -n openshift-authentication because it was missing | |
(x4) | openshift-catalogd |
kubelet |
catalogd-controller-manager-596f9d8bbf-wn7c6 |
FailedMount |
MountVolume.SetUp failed for volume "catalogserver-certs" : secret "catalogserver-cert" not found |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-1 -n openshift-kube-scheduler because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/kube-controller-manager-pod-1 -n openshift-kube-controller-manager because it was missing | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-revisioncontroller |
etcd-operator |
ConfigMapCreated |
Created ConfigMap/etcd-all-bundles-1 -n openshift-etcd because it was missing | |
openshift-apiserver |
replicaset-controller |
apiserver-5c6d48559d |
SuccessfulDelete |
Deleted pod: apiserver-5c6d48559d-v4vd9 | |
openshift-apiserver |
deployment-controller |
apiserver |
ScalingReplicaSet |
Scaled down replica set apiserver-5c6d48559d to 0 from 1 | |
openshift-apiserver |
default-scheduler |
apiserver-6576f6bc9d-xfzjr |
FailedScheduling |
0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. | |
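The scheduler message above means every candidate node already hosts a pod that the incoming pod's required anti-affinity selector matches (with `topologyKey: kubernetes.io/hostname`, one replica per node), so no node is feasible and preemption cannot help. A toy feasibility check, not scheduler code; the labels and node names are illustrative:

```python
# A node is infeasible when any pod already on it matches the incoming
# pod's required anti-affinity label selector.
def feasible_nodes(nodes, pods_by_node, selector):
    ok = []
    for node in nodes:
        if any(all(pod.get(k) == v for k, v in selector.items())
               for pod in pods_by_node.get(node, [])):
            continue  # anti-affinity violated on this node
        ok.append(node)
    return ok

# Two masters, each already running an apiserver replica:
pods = {
    "master-1": [{"app": "openshift-apiserver-a"}],
    "master-2": [{"app": "openshift-apiserver-a"}],
}
sel = {"app": "openshift-apiserver-a"}
print(feasible_nodes(["master-1", "master-2"], pods, sel))  # -> []
```

With only two schedulable masters and two replicas already placed (one of them the old-revision pod still terminating), the new replica stays Pending until a slot frees up, which is exactly what the surrounding rolling-update events show.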
(x6) | openshift-apiserver |
kubelet |
apiserver-5c6d48559d-v4vd9 |
FailedMount |
MountVolume.SetUp failed for volume "audit" : configmap "audit-0" not found |
openshift-apiserver |
replicaset-controller |
apiserver-6576f6bc9d |
SuccessfulCreate |
Created pod: apiserver-6576f6bc9d-xfzjr | |
openshift-apiserver |
deployment-controller |
apiserver |
ScalingReplicaSet |
Scaled up replica set apiserver-6576f6bc9d to 2 from 1 | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
SecretCreated |
Created Secret/localhost-recovery-client-token-1 -n openshift-kube-scheduler because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Degraded message changed from "RevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nNodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": serviceaccounts \"localhost-recovery-client\" not found" to "RevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nNodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]" | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/config-1 -n openshift-kube-controller-manager because it was missing | |
(x44) | openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-installer-controller |
openshift-kube-scheduler-operator |
RequiredInstallerResourcesMissing |
configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0 |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
RevisionTriggered |
new revision 1 triggered by "configmap \"kube-scheduler-pod-0\" not found" | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Degraded message changed from "RevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nNodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]" to "InstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nNodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]" | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
StartingNewRevision |
new revision 2 triggered by "optional secret/serving-cert has been created" | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/cluster-policy-controller-config-1 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-targetconfigcontroller |
kube-controller-manager-operator |
ConfigMapUpdated |
Updated ConfigMap/config -n openshift-kube-controller-manager: cause by changes in data.config.yaml | |
openshift-apiserver |
default-scheduler |
apiserver-6576f6bc9d-xfzjr |
Scheduled |
Successfully assigned openshift-apiserver/apiserver-6576f6bc9d-xfzjr to master-1 | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-revisioncontroller |
etcd-operator |
SecretCreated |
Created Secret/etcd-all-certs-1 -n openshift-etcd because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Degraded message changed from "InstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nNodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]" to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]",Progressing changed from False to True ("NodeInstallerProgressing: 2 nodes are at revision 0; 0 nodes have achieved new revision 1"),Available message changed from "StaticPodsAvailable: 0 nodes are active; 2 nodes are at revision 0" to "StaticPodsAvailable: 0 nodes are active; 2 nodes are at revision 0; 0 nodes have achieved new revision 1" | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/2 pods have been updated to the latest generation and 0/2 pods are available" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 2/2 pods have been updated to the latest generation and 1/2 pods are available",Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady" to "APIServicesAvailable: PreconditionNotReady" | |
openshift-authentication-operator |
cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources |
authentication-operator |
SecretCreated |
Created Secret/v4-0-config-system-ocp-branding-template -n openshift-authentication because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/controller-manager-kubeconfig-1 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-installer-controller |
openshift-kube-scheduler-operator |
NodeTargetRevisionChanged |
Updating node "master-1" from revision 0 to 1 because node master-1 static pod not found | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-config-observer-configobserver |
kube-apiserver-operator |
ObserveWebhookTokenAuthenticator |
authentication-token webhook configuration status changed from false to true | |
openshift-authentication-operator |
oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources |
authentication-operator |
ServiceAccountCreated |
Created ServiceAccount/oauth-apiserver-sa -n openshift-oauth-apiserver because it was missing | |
(x2) | openshift-oauth-apiserver |
controllermanager |
oauth-apiserver-pdb |
NoPods |
No matching pods found |
openshift-kube-apiserver-operator |
kube-apiserver-operator-config-observer-configobserver |
kube-apiserver-operator |
ObservedConfigChanged |
Writing updated observed config:   map[string]any{   "admission": map[string]any{"pluginConfig": map[string]any{"PodSecurity": map[string]any{"configuration": map[string]any{"defaults": map[string]any{"audit": string("restricted"), "audit-version": string("latest"), "enforce": string("privileged"), "enforce-version": string("latest"), ...}}}, "network.openshift.io/ExternalIPRanger": map[string]any{"configuration": map[string]any{"allowIngressIP": bool(false), "apiVersion": string("network.openshift.io/v1"), "kind": string("ExternalIPRangerAdmissionConfig")}}, "network.openshift.io/RestrictedEndpointsAdmission": map[string]any{"configuration": map[string]any{"apiVersion": string("network.openshift.io/v1"), "kind": string("RestrictedEndpointsAdmissionConfig"), "restrictedCIDRs": []any{string("10.128.0.0/14"), string("172.30.0.0/16")}}}}},   "apiServerArguments": map[string]any{   "api-audiences": []any{string("https://kubernetes.default.svc")}, + "authentication-token-webhook-config-file": []any{ + string("/etc/kubernetes/static-pod-resources/secrets/webhook-authenticator/kubeConfig"), + }, + "authentication-token-webhook-version": []any{string("v1")},   "etcd-servers": []any{string("https://192.168.34.10:2379"), string("https://localhost:2379")},   "feature-gates": []any{string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), ...},   ... // 5 identical entries   },   "corsAllowedOrigins": []any{string(`//127\.0\.0\.1(:|$)`), string("//localhost(:|$)")},   "servicesSubnet": string("172.30.0.0/16"),   "servingInfo": map[string]any{"bindAddress": string("0.0.0.0:6443"), "bindNetwork": string("tcp4"), "cipherSuites": []any{string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), string("TLS_CHACHA20_POLY1305_SHA256"), string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), ...}, "minTLSVersion": string("VersionTLS12"), ...},   } | |
openshift-authentication-operator |
oauth-apiserver-webhook-authenticator-controller-webhookauthenticatorcontroller |
authentication-operator |
SecretCreated |
Created Secret/webhook-authentication-integrated-oauth -n openshift-config because it was missing | |
openshift-authentication-operator |
oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources |
authentication-operator |
PodDisruptionBudgetCreated |
Created PodDisruptionBudget.policy/oauth-apiserver-pdb -n openshift-oauth-apiserver because it was missing | |
openshift-authentication-operator |
oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources |
authentication-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/system:openshift:useroauthaccesstoken-manager because it was missing | |
openshift-authentication-operator |
oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources |
authentication-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:useroauthaccesstoken-manager because it was missing | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "RevisionControllerDegraded: configmap \"audit\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-resource-sync-controller-resourcesynccontroller |
kube-apiserver-operator |
SecretCreated |
Created Secret/webhook-authenticator -n openshift-kube-apiserver because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/kube-scheduler-pod-2 -n openshift-kube-scheduler because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/kube-controller-cert-syncer-kubeconfig-1 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nCertRotation_ControlPlaneNodeAdminClient_Degraded: configmaps \"kube-control-plane-signer-ca\" already exists" to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]" | |
openshift-authentication-operator |
oauth-apiserver-revisioncontroller |
authentication-operator |
RevisionTriggered |
new revision 1 triggered by "configmap \"audit-0\" not found" | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-installer-controller |
openshift-kube-scheduler-operator |
PodCreated |
Created Pod/installer-1-master-1 -n openshift-kube-scheduler because it was missing | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "APIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-targetconfigcontroller |
kube-controller-manager-operator |
ConfigMapUpdated |
Updated ConfigMap/cluster-policy-controller-config -n openshift-kube-controller-manager: caused by changes in data.config.yaml | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/serviceaccount-ca-1 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/config-2 -n openshift-kube-scheduler because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/service-ca-1 -n openshift-kube-controller-manager because it was missing | |
openshift-machine-api |
kubelet |
cluster-baremetal-operator-6c8fbf4498-kcckh |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0ca84dadf413f08150ff8224f856cca12667b15168499013d0ff409dd323505d" in 11.84s (11.84s including waiting). Image size: 463860143 bytes. | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/recycler-config-1 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
SecretCreated |
Created Secret/service-account-private-key-1 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/serviceaccount-ca-2 -n openshift-kube-scheduler because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
SecretCreated |
Created Secret/serving-cert-1 -n openshift-kube-controller-manager because it was missing | |
openshift-authentication-operator |
cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources |
authentication-operator |
ServiceAccountCreated |
Created ServiceAccount/oauth-openshift -n openshift-authentication because it was missing | |
(x7) | openshift-route-controller-manager |
kubelet |
route-controller-manager-7f89f9db8c-dx7pm |
FailedMount |
MountVolume.SetUp failed for volume "client-ca" : configmap "client-ca" not found |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/scheduler-kubeconfig-2 -n openshift-kube-scheduler because it was missing | |
openshift-cluster-node-tuning-operator |
daemonset-controller |
tuned |
SuccessfulCreate |
Created pod: tuned-h7z5t | |
openshift-machine-api |
kubelet |
control-plane-machine-set-operator-84f9cbd5d9-n87md |
Created |
Created container: control-plane-machine-set-operator | |
openshift-machine-api |
kubelet |
cluster-autoscaler-operator-7ff449c7c5-nmpfk |
Created |
Created container: cluster-autoscaler-operator | |
openshift-cloud-credential-operator |
kubelet |
cloud-credential-operator-5cf49b6487-4cf2d |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6458d944052d69ffeffc62813d3a5cc3344ce7091b6df0ebf54d73c861355b01" in 14.125s (14.125s including waiting). Image size: 873399372 bytes. | |
openshift-cloud-credential-operator |
kubelet |
cloud-credential-operator-5cf49b6487-4cf2d |
Created |
Created container: cloud-credential-operator | |
default |
machineapioperator |
machine-api |
Status upgrade |
Progressing towards operator: 4.18.25 | |
openshift-cloud-credential-operator |
kubelet |
cloud-credential-operator-5cf49b6487-4cf2d |
Started |
Started container cloud-credential-operator | |
openshift-cluster-machine-approver |
master-1_16a5118e-4dd1-44ec-82c3-b21c5b10c1c9 |
cluster-machine-approver-leader |
LeaderElection |
master-1_16a5118e-4dd1-44ec-82c3-b21c5b10c1c9 became leader | |
openshift-machine-api |
kubelet |
cluster-autoscaler-operator-7ff449c7c5-nmpfk |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6f547c00317910e3dd789bb16cc2a04e545f737570d484481408a4d3303d5732" in 14.171s (14.171s including waiting). Image size: 449415489 bytes. | |
openshift-machine-api |
cluster-autoscaler-operator-7ff449c7c5-nmpfk_e267be74-6a1d-4d33-bc12-9889a2870cea |
cluster-autoscaler-operator-leader |
LeaderElection |
cluster-autoscaler-operator-7ff449c7c5-nmpfk_e267be74-6a1d-4d33-bc12-9889a2870cea became leader | |
openshift-cluster-machine-approver |
kubelet |
machine-approver-7876f99457-kpq7g |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c2bffa697d52826e0ba76ddc30a78f44b274be22ee87af8d1a9d1c8337162be9" in 14.215s (14.215s including waiting). Image size: 460276288 bytes. | |
openshift-cluster-machine-approver |
kubelet |
machine-approver-7876f99457-kpq7g |
Created |
Created container: machine-approver-controller | |
openshift-cluster-machine-approver |
kubelet |
machine-approver-7876f99457-kpq7g |
Started |
Started container machine-approver-controller | |
openshift-machine-api |
kubelet |
cluster-baremetal-operator-6c8fbf4498-kcckh |
Created |
Created container: cluster-baremetal-operator | |
openshift-machine-api |
kubelet |
cluster-baremetal-operator-6c8fbf4498-kcckh |
Started |
Started container cluster-baremetal-operator | |
openshift-cluster-node-tuning-operator |
kubelet |
cluster-node-tuning-operator-7866c9bdf4-d4dlj |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ca9272c8bbbde3ffdea2887c91dfb5ec4b09de7a8e2ae03aa5a47f56ff41e326" in 14.165s (14.165s including waiting). Image size: 681716323 bytes. | |
openshift-cluster-node-tuning-operator |
kubelet |
cluster-node-tuning-operator-7866c9bdf4-d4dlj |
Created |
Created container: cluster-node-tuning-operator | |
openshift-cluster-node-tuning-operator |
kubelet |
cluster-node-tuning-operator-7866c9bdf4-d4dlj |
Started |
Started container cluster-node-tuning-operator | |
openshift-cluster-node-tuning-operator |
performance-profile-controller |
cluster-node-tuning-operator |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} | |
openshift-cluster-node-tuning-operator |
cluster-node-tuning-operator-7866c9bdf4-d4dlj_4a36363d-d49a-4b53-91a4-c123536800a4 |
node-tuning-operator-lock |
LeaderElection |
cluster-node-tuning-operator-7866c9bdf4-d4dlj_4a36363d-d49a-4b53-91a4-c123536800a4 became leader | |
openshift-cluster-node-tuning-operator |
default-scheduler |
tuned-8487p |
Scheduled |
Successfully assigned openshift-cluster-node-tuning-operator/tuned-8487p to master-2 | |
openshift-machine-api |
kubelet |
cluster-baremetal-operator-6c8fbf4498-kcckh |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine | |
openshift-machine-api |
kubelet |
cluster-baremetal-operator-6c8fbf4498-kcckh |
Created |
Created container: baremetal-kube-rbac-proxy | |
openshift-machine-api |
kubelet |
cluster-baremetal-operator-6c8fbf4498-kcckh |
Started |
Started container baremetal-kube-rbac-proxy | |
openshift-image-registry |
kubelet |
cluster-image-registry-operator-6b8674d7ff-gspqw |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c78b39674bd52b55017e08466030e88727f76514fbfa4e1918541697374881b3" in 14.207s (14.207s including waiting). Image size: 541801559 bytes. | |
openshift-cluster-node-tuning-operator |
default-scheduler |
tuned-h7z5t |
Scheduled |
Successfully assigned openshift-cluster-node-tuning-operator/tuned-h7z5t to master-1 | |
openshift-image-registry |
kubelet |
cluster-image-registry-operator-6b8674d7ff-gspqw |
Created |
Created container: cluster-image-registry-operator | |
openshift-cluster-node-tuning-operator |
daemonset-controller |
tuned |
SuccessfulCreate |
Created pod: tuned-8487p | |
openshift-image-registry |
image-registry-operator |
openshift-master-controllers |
LeaderElection |
cluster-image-registry-operator-6b8674d7ff-gspqw_573f5c1c-0aed-4d15-8cf6-0d46436d972a became leader | |
openshift-machine-api |
kubelet |
machine-api-operator-9dbb96f7-s66vj |
Started |
Started container machine-api-operator | |
openshift-dns-operator |
kubelet |
dns-operator-7769d9677-nh2qc |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5ad9f2d4b8cf9205c5aa91b1eb9abafc2a638c7bd4b3f971f3d6b9a4df7318f" in 14.163s (14.163s including waiting). Image size: 461301475 bytes. | |
openshift-dns-operator |
kubelet |
dns-operator-7769d9677-nh2qc |
Created |
Created container: dns-operator | |
openshift-dns-operator |
kubelet |
dns-operator-7769d9677-nh2qc |
Started |
Started container dns-operator | |
openshift-dns-operator |
kubelet |
dns-operator-7769d9677-nh2qc |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine | |
openshift-dns-operator |
kubelet |
dns-operator-7769d9677-nh2qc |
Created |
Created container: kube-rbac-proxy | |
openshift-dns-operator |
kubelet |
dns-operator-7769d9677-nh2qc |
Started |
Started container kube-rbac-proxy | |
openshift-machine-api |
kubelet |
machine-api-operator-9dbb96f7-s66vj |
Created |
Created container: machine-api-operator | |
openshift-machine-api |
kubelet |
machine-api-operator-9dbb96f7-s66vj |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e7015eb7a0d62afeba6f2f0dbd57a8ef24b8477b00f66a6789ccf97b78271e9a" in 14.189s (14.189s including waiting). Image size: 855233892 bytes. | |
openshift-ingress-operator |
kubelet |
ingress-operator-766ddf4575-xhdjt |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:805f1bf09553ecf2e9d735c881539c011947eee7bf4c977b074e2d0396b9d99a" in 14.065s (14.065s including waiting). Image size: 504222816 bytes. | |
openshift-dns-operator |
cluster-dns-operator |
dns-operator |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} | |
openshift-kube-scheduler |
kubelet |
installer-1-master-1 |
Started |
Started container installer | |
openshift-apiserver |
multus |
apiserver-6576f6bc9d-xfzjr |
AddedInterface |
Add eth0 [10.128.0.43/23] from ovn-kubernetes | |
openshift-apiserver |
kubelet |
apiserver-6576f6bc9d-xfzjr |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:db6d4edac103c373eb6bee221074d39e3707377b4d26444e98afb1a1363b3cb7" | |
openshift-kube-scheduler |
kubelet |
installer-1-master-1 |
Created |
Created container: installer | |
openshift-kube-scheduler |
kubelet |
installer-1-master-1 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f81582ec6e6cc159d578a2d70ce7c8a4db8eb0172334226c9123770d7d2a1642" already present on machine | |
openshift-kube-scheduler |
multus |
installer-1-master-1 |
AddedInterface |
Add eth0 [10.128.0.44/23] from ovn-kubernetes | |
openshift-machine-api |
cluster-baremetal-operator-6c8fbf4498-kcckh_df342dbc-6714-4e98-b15d-64c85dac69f8 |
cluster-baremetal-operator |
LeaderElection |
cluster-baremetal-operator-6c8fbf4498-kcckh_df342dbc-6714-4e98-b15d-64c85dac69f8 became leader | |
openshift-ingress-operator |
kubelet |
ingress-operator-766ddf4575-xhdjt |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine | |
openshift-ingress-operator |
kubelet |
ingress-operator-766ddf4575-xhdjt |
Created |
Created container: kube-rbac-proxy | |
openshift-ingress-operator |
kubelet |
ingress-operator-766ddf4575-xhdjt |
Started |
Started container kube-rbac-proxy | |
openshift-image-registry |
kubelet |
cluster-image-registry-operator-6b8674d7ff-gspqw |
Started |
Started container cluster-image-registry-operator | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
SecretCreated |
Created Secret/localhost-recovery-client-token-1 -n openshift-kube-controller-manager because it was missing | |
openshift-etcd |
kubelet |
installer-1-master-1 |
Started |
Started container installer | |
openshift-etcd |
kubelet |
installer-1-master-1 |
Created |
Created container: installer | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-targetconfigcontroller |
openshift-kube-scheduler-operator |
ConfigMapUpdated |
Updated ConfigMap/kube-scheduler-pod -n openshift-kube-scheduler: caused by changes in data.pod.yaml | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-2 -n openshift-kube-scheduler because it was missing | |
openshift-etcd |
kubelet |
installer-1-master-1 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0117f94d9f2894980a318780f3c0ab2efba02e72bc7ccb267bd44c4900eb0174" already present on machine | |
openshift-etcd |
multus |
installer-1-master-1 |
AddedInterface |
Add eth0 [10.128.0.45/23] from ovn-kubernetes | |
openshift-machine-api |
control-plane-machine-set-operator-84f9cbd5d9-n87md_74b371cf-8bbd-43e8-8c0d-5c08212e5f2e |
control-plane-machine-set-leader |
LeaderElection |
control-plane-machine-set-operator-84f9cbd5d9-n87md_74b371cf-8bbd-43e8-8c0d-5c08212e5f2e became leader | |
openshift-image-registry |
image-registry-operator |
cluster-image-registry-operator |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} | |
openshift-ingress-operator |
cluster-ingress-operator |
ingress-operator |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} | |
openshift-machine-api |
machineapioperator |
machine-api-operator |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} | |
openshift-machine-api |
kubelet |
control-plane-machine-set-operator-84f9cbd5d9-n87md |
Started |
Started container control-plane-machine-set-operator | |
openshift-machine-api |
kubelet |
cluster-autoscaler-operator-7ff449c7c5-nmpfk |
Started |
Started container cluster-autoscaler-operator | |
openshift-cluster-version |
kubelet |
cluster-version-operator-55bd67947c-872k9 |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-release@sha256:ba6f0f2eca65cd386a5109ddbbdb3bab9bb9801e32de56ef34f80e634a7787be" | |
openshift-machine-api |
kubelet |
control-plane-machine-set-operator-84f9cbd5d9-n87md |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:90c5ef075961ab090e3854d470bb6659737ee76ac96637e6d0dd62080e38e26e" in 14.253s (14.253s including waiting). Image size: 463718256 bytes. | |
openshift-ingress |
default-scheduler |
router-default-5ddb89f76-887cs |
FailedScheduling |
0/2 nodes are available: 2 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. | |
openshift-dns |
default-scheduler |
node-resolver-6rrjr |
Scheduled |
Successfully assigned openshift-dns/node-resolver-6rrjr to master-2 | |
openshift-ingress |
replicaset-controller |
router-default-5ddb89f76 |
SuccessfulCreate |
Created pod: router-default-5ddb89f76-887cs | |
openshift-ingress |
deployment-controller |
router-default |
ScalingReplicaSet |
Scaled up replica set router-default-5ddb89f76 to 2 | |
openshift-cluster-node-tuning-operator |
kubelet |
tuned-h7z5t |
Started |
Started container tuned | |
openshift-ingress |
default-scheduler |
router-default-5ddb89f76-xf924 |
FailedScheduling |
0/2 nodes are available: 2 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. | |
openshift-dns |
daemonset-controller |
dns-default |
SuccessfulCreate |
Created pod: dns-default-zbv7v | |
openshift-dns |
default-scheduler |
dns-default-zbv7v |
Scheduled |
Successfully assigned openshift-dns/dns-default-zbv7v to master-1 | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-ingress namespace | |
openshift-ingress-operator |
ingress_controller |
default |
Admitted |
ingresscontroller passed validation | |
openshift-ingress-operator |
certificate_controller |
router-ca |
CreatedWildcardCACert |
Created a default wildcard CA certificate | |
openshift-dns |
kubelet |
node-resolver-6rrjr |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c1bf279b80440264700aa5e7b186b74a9ca45bd6a14638beb3ee5df0e610086a" already present on machine | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found\nGuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]" to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]" | |
(x61) | openshift-kube-controller-manager-operator |
kube-controller-manager-operator-installer-controller |
kube-controller-manager-operator |
RequiredInstallerResourcesMissing |
configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0 |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | StartingNewRevision | new revision 2 triggered by "required configmap/config has changed,required configmap/cluster-policy-controller-config has changed" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | RevisionTriggered | new revision 1 triggered by "configmap \"kube-controller-manager-pod-0\" not found" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | SecretCreated | Created Secret/serving-cert-2 -n openshift-kube-scheduler because it was missing |
| | openshift-dns | default-scheduler | node-resolver-lhshc | Scheduled | Successfully assigned openshift-dns/node-resolver-lhshc to master-1 |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-dns namespace |
| | openshift-dns | kubelet | node-resolver-lhshc | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c1bf279b80440264700aa5e7b186b74a9ca45bd6a14638beb3ee5df0e610086a" already present on machine |
| | openshift-cluster-node-tuning-operator | kubelet | tuned-8487p | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ca9272c8bbbde3ffdea2887c91dfb5ec4b09de7a8e2ae03aa5a47f56ff41e326" |
| | openshift-dns | daemonset-controller | node-resolver | SuccessfulCreate | Created pod: node-resolver-lhshc |
| | openshift-dns | daemonset-controller | node-resolver | SuccessfulCreate | Created pod: node-resolver-6rrjr |
| | openshift-dns | default-scheduler | dns-default-pbtld | Scheduled | Successfully assigned openshift-dns/dns-default-pbtld to master-2 |
| | openshift-dns | multus | dns-default-pbtld | AddedInterface | Add eth0 [10.129.0.14/23] from ovn-kubernetes |
| | openshift-dns | kubelet | dns-default-zbv7v | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:def4bc41ba62687d8c9a68b6f74c39240f651ec7a039a78a6535233581f430a7" |
| | openshift-dns | multus | dns-default-zbv7v | AddedInterface | Add eth0 [10.128.0.46/23] from ovn-kubernetes |
| | openshift-cluster-node-tuning-operator | kubelet | tuned-h7z5t | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ca9272c8bbbde3ffdea2887c91dfb5ec4b09de7a8e2ae03aa5a47f56ff41e326" already present on machine |
| | openshift-dns | kubelet | dns-default-pbtld | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:def4bc41ba62687d8c9a68b6f74c39240f651ec7a039a78a6535233581f430a7" |
| | openshift-ingress | replicaset-controller | router-default-5ddb89f76 | SuccessfulCreate | Created pod: router-default-5ddb89f76-xf924 |
| | openshift-cluster-node-tuning-operator | kubelet | tuned-h7z5t | Created | Created container: tuned |
| | openshift-dns | daemonset-controller | dns-default | SuccessfulCreate | Created pod: dns-default-pbtld |
| | openshift-cluster-version | kubelet | cluster-version-operator-55bd67947c-872k9 | Created | Created container: cluster-version-operator |
| | openshift-oauth-apiserver | replicaset-controller | apiserver-c57444595 | SuccessfulCreate | Created pod: apiserver-c57444595-zs4m8 |
| | openshift-config-managed | certificate_publisher_controller | router-certs | PublishedRouterCertificates | Published router certificates |
| | openshift-cluster-version | kubelet | cluster-version-operator-55bd67947c-872k9 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-release@sha256:ba6f0f2eca65cd386a5109ddbbdb3bab9bb9801e32de56ef34f80e634a7787be" in 2.254s (2.254s including waiting). Image size: 511020601 bytes. |
| | openshift-oauth-apiserver | default-scheduler | apiserver-c57444595-zs4m8 | Scheduled | Successfully assigned openshift-oauth-apiserver/apiserver-c57444595-zs4m8 to master-1 |
| | openshift-config-managed | certificate_publisher_controller | default-ingress-cert | PublishedRouterCA | Published "default-ingress-cert" in "openshift-config-managed" |
| | openshift-oauth-apiserver | default-scheduler | apiserver-c57444595-mj7cx | Scheduled | Successfully assigned openshift-oauth-apiserver/apiserver-c57444595-mj7cx to master-2 |
| | openshift-dns | kubelet | node-resolver-lhshc | Started | Started container dns-node-resolver |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found",Progressing changed from False to True ("APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 0, desired generation is 1."),Available message changed from "APIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 2 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 2 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." |
| | openshift-dns | kubelet | node-resolver-lhshc | Created | Created container: dns-node-resolver |
| | openshift-authentication-operator | oauth-apiserver-oauthapiservercontroller-workloadworkloadcontroller | authentication-operator | DeploymentCreated | Created Deployment.apps/apiserver -n openshift-oauth-apiserver because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | SecretCreated | Created Secret/localhost-recovery-client-token-2 -n openshift-kube-scheduler because it was missing |
| | openshift-dns | kubelet | node-resolver-6rrjr | Started | Started container dns-node-resolver |
| | openshift-cluster-version | kubelet | cluster-version-operator-55bd67947c-872k9 | Started | Started container cluster-version-operator |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | NodeTargetRevisionChanged | Updating node "master-1" from revision 0 to 1 because node master-1 static pod not found |
| | openshift-oauth-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled up replica set apiserver-c57444595 to 2 |
| | openshift-ingress-operator | certificate_controller | default | CreatedDefaultCertificate | Created default wildcard certificate "router-certs-default" |
| | openshift-dns | kubelet | node-resolver-6rrjr | Created | Created container: dns-node-resolver |
| | openshift-oauth-apiserver | replicaset-controller | apiserver-c57444595 | SuccessfulCreate | Created pod: apiserver-c57444595-mj7cx |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]" to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]",Progressing changed from False to True ("NodeInstallerProgressing: 2 nodes are at revision 0; 0 nodes have achieved new revision 1"),Available message changed from "StaticPodsAvailable: 0 nodes are active; 2 nodes are at revision 0" to "StaticPodsAvailable: 0 nodes are active; 2 nodes are at revision 0; 0 nodes have achieved new revision 1" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " |
| | openshift-oauth-apiserver | kubelet | apiserver-c57444595-zs4m8 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a10f1f5c782b4f4fb9c364625daf34791903749d4149eb87291c70598b16b404" |
| | openshift-authentication-operator | cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources | authentication-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/system:openshift:oauth-servercert-trust -n openshift-config-managed because it was missing |
| | openshift-oauth-apiserver | kubelet | apiserver-c57444595-mj7cx | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a10f1f5c782b4f4fb9c364625daf34791903749d4149eb87291c70598b16b404" |
| | openshift-oauth-apiserver | multus | apiserver-c57444595-mj7cx | AddedInterface | Add eth0 [10.129.0.15/23] from ovn-kubernetes |
| | openshift-authentication-operator | cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources | authentication-operator | ServiceCreated | Created Service/oauth-openshift -n openshift-authentication because it was missing |
| | openshift-authentication-operator | cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources | authentication-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/system:openshift:oauth-servercert-trust -n openshift-config-managed because it was missing |
| | openshift-oauth-apiserver | multus | apiserver-c57444595-zs4m8 | AddedInterface | Add eth0 [10.128.0.47/23] from ovn-kubernetes |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | StartingNewRevision | new revision 3 triggered by "required configmap/kube-scheduler-pod has changed" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | RevisionTriggered | new revision 2 triggered by "optional secret/serving-cert has been created" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/kube-controller-manager-pod-2 -n openshift-kube-controller-manager because it was missing |
| | openshift-cluster-samples-operator | deployment-controller | cluster-samples-operator | ScalingReplicaSet | Scaled up replica set cluster-samples-operator-75f9c7d795 to 1 |
| | openshift-cluster-samples-operator | replicaset-controller | cluster-samples-operator-75f9c7d795 | SuccessfulCreate | Created pod: cluster-samples-operator-75f9c7d795-v2gmx |
| | openshift-cluster-samples-operator | default-scheduler | cluster-samples-operator-75f9c7d795-v2gmx | Scheduled | Successfully assigned openshift-cluster-samples-operator/cluster-samples-operator-75f9c7d795-v2gmx to master-2 |
| | openshift-dns | kubelet | dns-default-zbv7v | Started | Started container kube-rbac-proxy |
| | openshift-apiserver | kubelet | apiserver-6576f6bc9d-xfzjr | Started | Started container fix-audit-permissions |
| | openshift-dns | kubelet | dns-default-zbv7v | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:def4bc41ba62687d8c9a68b6f74c39240f651ec7a039a78a6535233581f430a7" in 3.67s (3.67s including waiting). Image size: 477215701 bytes. |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " |
| | openshift-dns | kubelet | dns-default-zbv7v | Created | Created container: dns |
| | openshift-dns | kubelet | dns-default-zbv7v | Created | Created container: kube-rbac-proxy |
| | openshift-route-controller-manager | replicaset-controller | route-controller-manager-7f89f9db8c | SuccessfulDelete | Deleted pod: route-controller-manager-7f89f9db8c-dx7pm |
| | openshift-route-controller-manager | default-scheduler | route-controller-manager-6f6c689d49-xd4xv | FailedScheduling | 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. |
| | openshift-route-controller-manager | replicaset-controller | route-controller-manager-6f6c689d49 | SuccessfulCreate | Created pod: route-controller-manager-6f6c689d49-xd4xv |
| | openshift-apiserver | kubelet | apiserver-6576f6bc9d-xfzjr | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:db6d4edac103c373eb6bee221074d39e3707377b4d26444e98afb1a1363b3cb7" in 5.082s (5.082s including waiting). Image size: 582409947 bytes. |
| | openshift-apiserver | kubelet | apiserver-6576f6bc9d-xfzjr | Created | Created container: fix-audit-permissions |
| | openshift-authentication-operator | cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources | authentication-operator | ConfigMapCreated | Created ConfigMap/audit -n openshift-authentication because it was missing |
| | openshift-dns | kubelet | dns-default-zbv7v | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine |
| | openshift-dns | kubelet | dns-default-zbv7v | Started | Started container dns |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | PodCreated | Created Pod/installer-1-master-1 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/config-2 -n openshift-kube-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 3, desired generation is 4.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/controller-manager: updated replicas is 1, desired replicas is 2\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" to "Progressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/controller-manager: updated replicas is 1, desired replicas is 2\nProgressing: deployment/route-controller-manager: observed generation is 1, desired generation is 2.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapUpdated | Updated ConfigMap/config -n openshift-route-controller-manager: cause by changes in data.openshift-route-controller-manager.serving-cert.secret |
| (x2) | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | DeploymentUpdated | Updated Deployment.apps/route-controller-manager -n openshift-route-controller-manager because it changed |
| | openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled up replica set route-controller-manager-6f6c689d49 to 1 from 0 |
| | openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled down replica set route-controller-manager-7f89f9db8c to 1 from 2 |
| | openshift-cluster-samples-operator | kubelet | cluster-samples-operator-75f9c7d795-v2gmx | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1efcdfb7891b86be5263a5d794628d16a717a5f8cb447168f40e18482eb29ab5" |
| | openshift-oauth-apiserver | kubelet | apiserver-c57444595-zs4m8 | Started | Started container fix-audit-permissions |
| | openshift-route-controller-manager | default-scheduler | route-controller-manager-6f6c689d49-xd4xv | Scheduled | Successfully assigned openshift-route-controller-manager/route-controller-manager-6f6c689d49-xd4xv to master-2 |
| | openshift-oauth-apiserver | kubelet | apiserver-c57444595-zs4m8 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a10f1f5c782b4f4fb9c364625daf34791903749d4149eb87291c70598b16b404" already present on machine |
| | openshift-cluster-node-tuning-operator | kubelet | tuned-8487p | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ca9272c8bbbde3ffdea2887c91dfb5ec4b09de7a8e2ae03aa5a47f56ff41e326" in 5.171s (5.171s including waiting). Image size: 681716323 bytes. |
| | openshift-cluster-node-tuning-operator | kubelet | tuned-8487p | Created | Created container: tuned |
| | openshift-dns | kubelet | dns-default-pbtld | Started | Started container kube-rbac-proxy |
| | openshift-dns | kubelet | dns-default-pbtld | Created | Created container: kube-rbac-proxy |
| | openshift-dns | kubelet | dns-default-pbtld | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine |
| | openshift-dns | kubelet | dns-default-pbtld | Started | Started container dns |
| | openshift-dns | kubelet | dns-default-pbtld | Created | Created container: dns |
| | openshift-dns | kubelet | dns-default-pbtld | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:def4bc41ba62687d8c9a68b6f74c39240f651ec7a039a78a6535233581f430a7" in 4.457s (4.457s including waiting). Image size: 477215701 bytes. |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/kube-scheduler-pod-3 -n openshift-kube-scheduler because it was missing |
| | openshift-cluster-node-tuning-operator | kubelet | tuned-8487p | Started | Started container tuned |
| | openshift-apiserver | kubelet | apiserver-6576f6bc9d-xfzjr | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine |
| | openshift-cluster-samples-operator | multus | cluster-samples-operator-75f9c7d795-v2gmx | AddedInterface | Add eth0 [10.129.0.16/23] from ovn-kubernetes |
| | openshift-oauth-apiserver | kubelet | apiserver-c57444595-mj7cx | Started | Started container fix-audit-permissions |
| | openshift-oauth-apiserver | kubelet | apiserver-c57444595-mj7cx | Created | Created container: fix-audit-permissions |
| | openshift-oauth-apiserver | kubelet | apiserver-c57444595-mj7cx | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a10f1f5c782b4f4fb9c364625daf34791903749d4149eb87291c70598b16b404" in 3.04s (3.04s including waiting). Image size: 498371692 bytes. |
| | openshift-apiserver | kubelet | apiserver-6576f6bc9d-xfzjr | Started | Started container openshift-apiserver |
| | openshift-kube-controller-manager | kubelet | installer-1-master-1 | Started | Started container installer |
| | openshift-kube-controller-manager | kubelet | installer-1-master-1 | Created | Created container: installer |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/cluster-policy-controller-config-2 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager | kubelet | installer-1-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:67a578604f1437ddb47d87e748b6772d86dd3856048cc355226789db22724b55" already present on machine |
| | openshift-kube-controller-manager | multus | installer-1-master-1 | AddedInterface | Add eth0 [10.128.0.48/23] from ovn-kubernetes |
| | openshift-apiserver | kubelet | apiserver-6576f6bc9d-xfzjr | Created | Created container: openshift-apiserver |
| | openshift-apiserver | kubelet | apiserver-6576f6bc9d-xfzjr | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:db6d4edac103c373eb6bee221074d39e3707377b4d26444e98afb1a1363b3cb7" already present on machine |
| | openshift-oauth-apiserver | kubelet | apiserver-c57444595-zs4m8 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a10f1f5c782b4f4fb9c364625daf34791903749d4149eb87291c70598b16b404" in 2.914s (2.914s including waiting). Image size: 498371692 bytes. |
| | openshift-oauth-apiserver | kubelet | apiserver-c57444595-zs4m8 | Created | Created container: fix-audit-permissions |
| | openshift-oauth-apiserver | kubelet | apiserver-c57444595-mj7cx | Created | Created container: oauth-apiserver |
| | openshift-kube-scheduler | kubelet | installer-1-master-1 | Killing | Stopping container installer |
| | openshift-oauth-apiserver | kubelet | apiserver-c57444595-mj7cx | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a10f1f5c782b4f4fb9c364625daf34791903749d4149eb87291c70598b16b404" already present on machine |
| | openshift-apiserver | kubelet | apiserver-6576f6bc9d-xfzjr | Created | Created container: openshift-apiserver-check-endpoints |
| | openshift-apiserver | kubelet | apiserver-6576f6bc9d-xfzjr | Started | Started container openshift-apiserver-check-endpoints |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Progressing message changed from "NodeInstallerProgressing: 2 nodes are at revision 0; 0 nodes have achieved new revision 1" to "NodeInstallerProgressing: 2 nodes are at revision 0; 0 nodes have achieved new revision 2",Available message changed from "StaticPodsAvailable: 0 nodes are active; 2 nodes are at revision 0; 0 nodes have achieved new revision 1" to "StaticPodsAvailable: 0 nodes are active; 2 nodes are at revision 0; 0 nodes have achieved new revision 2" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/config-3 -n openshift-kube-scheduler because it was missing |
| | openshift-oauth-apiserver | kubelet | apiserver-c57444595-mj7cx | Started | Started container oauth-apiserver |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.182.42:443/healthz\": dial tcp 172.30.182.42:443: connect: connection refused\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.182.42:443/healthz\": dial tcp 172.30.182.42:443: connect: connection refused\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.182.42:443/healthz\": dial tcp 172.30.182.42:443: connect: connection refused\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.182.42:443/healthz\": dial tcp 172.30.182.42:443: connect: connection refused\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " |
openshift-authentication-operator |
cluster-authentication-operator-config-observer-configobserver |
authentication-operator |
ObservedConfigChanged |
Writing updated section ("oauthServer") of observed config: "\u00a0\u00a0map[string]any{\n\u00a0\u00a0\t\"corsAllowedOrigins\": []any{string(`//127\\.0\\.0\\.1(:|$)`), string(\"//localhost(:|$)\")},\n\u00a0\u00a0\t\"oauthConfig\": map[string]any{\"assetPublicURL\": string(\"\"), \"loginURL\": string(\"https://api.ocp.openstack.lab:6443\"), \"templates\": map[string]any{\"error\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...), \"login\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...), \"providerSelection\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...)}, \"tokenConfig\": map[string]any{\"accessTokenMaxAgeSeconds\": float64(86400), \"authorizeTokenMaxAgeSeconds\": float64(300)}},\n\u00a0\u00a0\t\"serverArguments\": map[string]any{\"audit-log-format\": []any{string(\"json\")}, \"audit-log-maxbackup\": []any{string(\"10\")}, \"audit-log-maxsize\": []any{string(\"100\")}, \"audit-log-path\": []any{string(\"/var/log/oauth-server/audit.log\")}, ...},\n\u00a0\u00a0\t\"servingInfo\": map[string]any{\n\u00a0\u00a0\t\t\"cipherSuites\": []any{string(\"TLS_AES_128_GCM_SHA256\"), string(\"TLS_AES_256_GCM_SHA384\"), string(\"TLS_CHACHA20_POLY1305_SHA256\"), string(\"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256\"), ...},\n\u00a0\u00a0\t\t\"minTLSVersion\": string(\"VersionTLS12\"),\n+\u00a0\t\t\"namedCertificates\": []any{\n+\u00a0\t\t\tmap[string]any{\n+\u00a0\t\t\t\t\"certFile\": string(\"/var/config/system/secrets/v4-0-config-system-router-certs/apps.\"...),\n+\u00a0\t\t\t\t\"keyFile\": string(\"/var/config/system/secrets/v4-0-config-system-router-certs/apps.\"...),\n+\u00a0\t\t\t\t\"names\": []any{string(\"*.apps.ocp.openstack.lab\")},\n+\u00a0\t\t\t},\n+\u00a0\t\t},\n\u00a0\u00a0\t},\n\u00a0\u00a0\t\"volumesToMount\": map[string]any{\"identityProviders\": string(\"{}\")},\n\u00a0\u00a0}\n" | |
openshift-authentication-operator |
cluster-authentication-operator-config-observer-configobserver |
authentication-operator |
ObserveRouterSecret |
namedCertificates changed to []interface {}{map[string]interface {}{"certFile":"/var/config/system/secrets/v4-0-config-system-router-certs/apps.ocp.openstack.lab", "keyFile":"/var/config/system/secrets/v4-0-config-system-router-certs/apps.ocp.openstack.lab", "names":[]interface {}{"*.apps.ocp.openstack.lab"}}} | |
openshift-authentication-operator |
cluster-authentication-operator-routercertsdomainvalidationcontroller |
authentication-operator |
SecretCreated |
Created Secret/v4-0-config-system-router-certs -n openshift-authentication because it was missing | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nOAuthServerServiceEndpointAccessibleControllerDegraded: \"service-ca.crt\" key of the \"openshift-authentication/v4-0-config-system-service-ca\" CM is empty\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.182.42:443/healthz\": dial tcp 172.30.182.42:443: connect: connection refused\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 2 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.182.42:443/healthz\": dial tcp 172.30.182.42:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 2 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerDegraded: \"service-ca.crt\" key of the \"openshift-authentication/v4-0-config-system-service-ca\" CM is empty\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nOAuthServerServiceEndpointAccessibleControllerDegraded: \"service-ca.crt\" key of the \"openshift-authentication/v4-0-config-system-service-ca\" CM is empty\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " | |
openshift-oauth-apiserver |
kubelet |
apiserver-c57444595-zs4m8 |
Created |
Created container: oauth-apiserver | |
openshift-oauth-apiserver |
kubelet |
apiserver-c57444595-zs4m8 |
Started |
Started container oauth-apiserver | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerDegraded: \"service-ca.crt\" key of the \"openshift-authentication/v4-0-config-system-service-ca\" CM is empty\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " | |
openshift-cluster-samples-operator |
file-change-watchdog |
cluster-samples-operator |
FileChangeWatchdogStarted |
Started watching files for process cluster-samples-operator[2] | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/serviceaccount-ca-3 -n openshift-kube-scheduler because it was missing | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 0, desired generation is 1." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 2/2 pods have been updated to the latest generation and 0/2 pods are available" | |
openshift-cluster-samples-operator |
kubelet |
cluster-samples-operator-75f9c7d795-v2gmx |
Created |
Created container: cluster-samples-operator-watch | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/controller-manager-kubeconfig-2 -n openshift-kube-controller-manager because it was missing | |
openshift-cluster-samples-operator |
kubelet |
cluster-samples-operator-75f9c7d795-v2gmx |
Started |
Started container cluster-samples-operator-watch | |
openshift-cluster-samples-operator |
kubelet |
cluster-samples-operator-75f9c7d795-v2gmx |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1efcdfb7891b86be5263a5d794628d16a717a5f8cb447168f40e18482eb29ab5" already present on machine | |
openshift-cluster-samples-operator |
kubelet |
cluster-samples-operator-75f9c7d795-v2gmx |
Started |
Started container cluster-samples-operator | |
openshift-cluster-samples-operator |
kubelet |
cluster-samples-operator-75f9c7d795-v2gmx |
Created |
Created container: cluster-samples-operator | |
openshift-cluster-samples-operator |
kubelet |
cluster-samples-operator-75f9c7d795-v2gmx |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1efcdfb7891b86be5263a5d794628d16a717a5f8cb447168f40e18482eb29ab5" in 2s (2s including waiting). Image size: 448523681 bytes. | |
openshift-apiserver |
kubelet |
apiserver-6576f6bc9d-xfzjr |
ProbeError |
Startup probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd ok [+]poststarthook/start-apiserver-admission-initializer ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/max-in-flight-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/image.openshift.io-apiserver-caches ok [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok [+]poststarthook/project.openshift.io-projectcache ok [+]poststarthook/project.openshift.io-projectauthorizationcache ok [+]poststarthook/openshift.io-startinformers ok [+]poststarthook/openshift.io-restmapperupdater ok [+]poststarthook/quota.openshift.io-clusterquotamapping ok livez check failed | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-targetconfigcontroller |
kube-controller-manager-operator |
ConfigMapUpdated |
Updated ConfigMap/kube-controller-manager-pod -n openshift-kube-controller-manager: caused by changes in data.pod.yaml | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/kube-controller-cert-syncer-kubeconfig-2 -n openshift-kube-controller-manager because it was missing | |
openshift-apiserver |
kubelet |
apiserver-6576f6bc9d-xfzjr |
Unhealthy |
Startup probe failed: HTTP probe failed with statuscode: 500 | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.182.42:443/healthz\": dial tcp 172.30.182.42:443: connect: connection refused\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.182.42:443/healthz\": dial tcp 172.30.182.42:443: connect: connection refused\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-installer-controller |
openshift-kube-scheduler-operator |
PodCreated |
Created Pod/installer-2-master-1 -n openshift-kube-scheduler because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/scheduler-kubeconfig-3 -n openshift-kube-scheduler because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/serviceaccount-ca-2 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-scheduler |
kubelet |
installer-2-master-1 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f81582ec6e6cc159d578a2d70ce7c8a4db8eb0172334226c9123770d7d2a1642" already present on machine | |
openshift-kube-scheduler |
kubelet |
installer-2-master-1 |
Started |
Started container installer | |
openshift-kube-scheduler |
kubelet |
installer-2-master-1 |
Created |
Created container: installer | |
openshift-kube-scheduler |
multus |
installer-2-master-1 |
AddedInterface |
Add eth0 [10.128.0.49/23] from ovn-kubernetes | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/service-ca-2 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-3 -n openshift-kube-scheduler because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/recycler-config-2 -n openshift-kube-controller-manager because it was missing | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.182.42:443/healthz\": dial tcp 172.30.182.42:443: connect: connection refused\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.182.42:443/healthz\": dial tcp 172.30.182.42:443: connect: connection refused\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server" | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-targetconfigcontroller |
openshift-kube-scheduler-operator |
ConfigMapUpdated |
Updated ConfigMap/serviceaccount-ca -n openshift-kube-scheduler: caused by changes in data.ca-bundle.crt | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
SecretCreated |
Created Secret/serving-cert-3 -n openshift-kube-scheduler because it was missing | |
openshift-authentication-operator |
cluster-authentication-operator-trust-distribution-trustdistributioncontroller |
authentication-operator |
ConfigMapCreated |
Created ConfigMap/oauth-serving-cert -n openshift-config-managed because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
SecretCreated |
Created Secret/service-account-private-key-2 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
SecretCreated |
Created Secret/localhost-recovery-client-token-3 -n openshift-kube-scheduler because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
SecretCreated |
Created Secret/serving-cert-2 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
RevisionTriggered |
new revision 3 triggered by "required configmap/kube-scheduler-pod has changed" | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
StartingNewRevision |
new revision 4 triggered by "required configmap/serviceaccount-ca has changed" | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorVersionChanged |
clusteroperator/openshift-apiserver version "openshift-apiserver" changed from "" to "4.18.25" | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
SecretCreated |
Created Secret/localhost-recovery-client-token-2 -n openshift-kube-controller-manager because it was missing | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorVersionChanged |
clusteroperator/authentication version "oauth-apiserver" changed from "" to "4.18.25" | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: status.versions changed from [{"operator" "4.18.25"}] to [{"operator" "4.18.25"} {"oauth-apiserver" "4.18.25"}] | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Progressing changed from True to False ("AuthenticatorCertKeyProgressing: All is well"),Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.182.42:443/healthz\": dial tcp 172.30.182.42:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 2 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.182.42:443/healthz\": dial tcp 172.30.182.42:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 2 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: status.versions changed from [{"operator" "4.18.25"}] to [{"operator" "4.18.25"} {"openshift-apiserver" "4.18.25"}] | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Progressing changed from True to False ("All is well") | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
StartingNewRevision |
new revision 3 triggered by "required configmap/kube-controller-manager-pod has changed" | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
RevisionTriggered |
new revision 2 triggered by "required configmap/config has changed,required configmap/cluster-policy-controller-config has changed" | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/kube-scheduler-pod-4 -n openshift-kube-scheduler because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Progressing message changed from "NodeInstallerProgressing: 2 nodes are at revision 0; 0 nodes have achieved new revision 2" to "NodeInstallerProgressing: 2 nodes are at revision 0; 0 nodes have achieved new revision 3",Available message changed from "StaticPodsAvailable: 0 nodes are active; 2 nodes are at revision 0; 0 nodes have achieved new revision 2" to "StaticPodsAvailable: 0 nodes are active; 2 nodes are at revision 0; 0 nodes have achieved new revision 3" | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/config-4 -n openshift-kube-scheduler because it was missing | |
openshift-kube-scheduler |
kubelet |
installer-2-master-1 |
Killing |
Stopping container installer | |
openshift-operator-lifecycle-manager |
kubelet |
package-server-manager-798cc87f55-j2bjv |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine | |
openshift-operator-lifecycle-manager |
kubelet |
olm-operator-867f8475d9-fl56c |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac3e8e21a2acf57632da1156613d3ce424cc06446f4bd47349c7919367e1ff0f" | |
openshift-multus |
multus |
multus-admission-controller-77b66fddc8-mgc7h |
AddedInterface |
Add eth0 [10.128.0.35/23] from ovn-kubernetes | |
openshift-operator-lifecycle-manager |
kubelet |
package-server-manager-798cc87f55-j2bjv |
Created |
Created container: kube-rbac-proxy | |
openshift-multus |
kubelet |
multus-admission-controller-77b66fddc8-9npgz |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:20340db1108fda428a7abee6193330945c70ad69148f122a7f32a889047c8003" | |
openshift-multus |
multus |
multus-admission-controller-77b66fddc8-9npgz |
AddedInterface |
Add eth0 [10.128.0.27/23] from ovn-kubernetes | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/kube-controller-manager-pod-3 -n openshift-kube-controller-manager because it was missing | |
openshift-monitoring |
kubelet |
cluster-monitoring-operator-5b5dd85dcc-cxtgh |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:732db322c7ea7d239293fdd893e493775fd05ed4370bfe908c6995d4beabc0a4" | |
openshift-monitoring |
multus |
cluster-monitoring-operator-5b5dd85dcc-cxtgh |
AddedInterface |
Add eth0 [10.128.0.7/23] from ovn-kubernetes | |
openshift-multus |
kubelet |
multus-admission-controller-77b66fddc8-mgc7h |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:20340db1108fda428a7abee6193330945c70ad69148f122a7f32a889047c8003" | |
openshift-operator-lifecycle-manager |
multus |
package-server-manager-798cc87f55-j2bjv |
AddedInterface |
Add eth0 [10.128.0.12/23] from ovn-kubernetes | |
openshift-machine-config-operator |
multus |
machine-config-operator-7b75469658-j2dbc |
AddedInterface |
Add eth0 [10.128.0.17/23] from ovn-kubernetes | |
openshift-operator-lifecycle-manager |
multus |
olm-operator-867f8475d9-fl56c |
AddedInterface |
Add eth0 [10.128.0.15/23] from ovn-kubernetes | |
openshift-operator-lifecycle-manager |
kubelet |
package-server-manager-798cc87f55-j2bjv |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac3e8e21a2acf57632da1156613d3ce424cc06446f4bd47349c7919367e1ff0f" | |
openshift-machine-config-operator |
machine-config-operator |
master-1 |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", 
"TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} | |
openshift-operator-lifecycle-manager |
kubelet |
package-server-manager-798cc87f55-j2bjv |
Started |
Started container kube-rbac-proxy | |
openshift-operator-lifecycle-manager |
kubelet |
catalog-operator-f966fb6f8-dwwm2 |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac3e8e21a2acf57632da1156613d3ce424cc06446f4bd47349c7919367e1ff0f" | |
openshift-operator-lifecycle-manager |
multus |
catalog-operator-f966fb6f8-dwwm2 |
AddedInterface |
Add eth0 [10.128.0.16/23] from ovn-kubernetes | |
openshift-marketplace |
kubelet |
marketplace-operator-c4f798dd4-djh96 |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c265fd635e36ef28c00f961a9969135e715f43af7f42455c9bde03a6b95ddc3e" | |
openshift-marketplace |
multus |
marketplace-operator-c4f798dd4-djh96 |
AddedInterface |
Add eth0 [10.128.0.25/23] from ovn-kubernetes | |
openshift-machine-config-operator |
kubelet |
machine-config-operator-7b75469658-j2dbc |
Started |
Started container kube-rbac-proxy | |
openshift-machine-config-operator |
kubelet |
machine-config-operator-7b75469658-j2dbc |
Created |
Created container: kube-rbac-proxy | |
openshift-machine-config-operator |
kubelet |
machine-config-operator-7b75469658-j2dbc |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/mcd-prometheus-k8s -n openshift-machine-config-operator because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/serviceaccount-ca-4 -n openshift-kube-scheduler because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/config-3 -n openshift-kube-controller-manager because it was missing | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/mcd-prometheus-k8s -n openshift-machine-config-operator because it was missing | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/machine-config-daemon-events -n openshift-machine-config-operator because it was missing | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/machine-config-daemon-events -n default because it was missing | |
openshift-kube-controller-manager |
kubelet |
installer-1-master-1 |
Killing |
Stopping container installer | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/machine-config-daemon -n openshift-machine-config-operator because it was missing | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/machine-config-daemon-events because it was missing | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
SecretCreated |
Created Secret/master-user-data-managed -n openshift-machine-api because it was missing | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
SecretCreated |
Created Secret/worker-user-data-managed -n openshift-machine-api because it was missing | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/machine-config-daemon because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Progressing message changed from "NodeInstallerProgressing: 2 nodes are at revision 0; 0 nodes have achieved new revision 1" to "NodeInstallerProgressing: 2 nodes are at revision 0; 0 nodes have achieved new revision 2",Available message changed from "StaticPodsAvailable: 0 nodes are active; 2 nodes are at revision 0; 0 nodes have achieved new revision 1" to "StaticPodsAvailable: 0 nodes are active; 2 nodes are at revision 0; 0 nodes have achieved new revision 2" | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ServiceAccountCreated |
Created ServiceAccount/machine-config-daemon -n openshift-machine-config-operator because it was missing | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/machine-config-daemon -n openshift-machine-config-operator because it was missing | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/machine-config-daemon because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-installer-controller |
openshift-kube-scheduler-operator |
PodCreated |
Created Pod/installer-3-master-1 -n openshift-kube-scheduler because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/scheduler-kubeconfig-4 -n openshift-kube-scheduler because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-4 -n openshift-kube-scheduler because it was missing | |
openshift-monitoring |
kubelet |
cluster-monitoring-operator-5b5dd85dcc-cxtgh |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:732db322c7ea7d239293fdd893e493775fd05ed4370bfe908c6995d4beabc0a4" in 3.806s (3.806s including waiting). Image size: 477490934 bytes. | |
openshift-multus |
kubelet |
multus-admission-controller-77b66fddc8-9npgz |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:20340db1108fda428a7abee6193330945c70ad69148f122a7f32a889047c8003" in 3.823s (3.823s including waiting). Image size: 449613161 bytes. | |
openshift-multus |
kubelet |
multus-admission-controller-77b66fddc8-mgc7h |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:20340db1108fda428a7abee6193330945c70ad69148f122a7f32a889047c8003" in 3.341s (3.341s including waiting). Image size: 449613161 bytes. | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-targetconfigcontroller |
kube-controller-manager-operator |
ConfigMapUpdated |
Updated ConfigMap/serviceaccount-ca -n openshift-kube-controller-manager: caused by changes in data.ca-bundle.crt | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/cluster-policy-controller-config-3 -n openshift-kube-controller-manager because it was missing | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ConfigMapUpdated |
Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: caused by changes in data.config-file.yaml | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ValidatingAdmissionPolicyCreated |
Created ValidatingAdmissionPolicy.admissionregistration.k8s.io/mcn-guards because it was missing | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ValidatingAdmissionPolicyBindingCreated |
Created ValidatingAdmissionPolicyBinding.admissionregistration.k8s.io/mcn-guards-binding because it was missing | |
openshift-marketplace |
kubelet |
marketplace-operator-c4f798dd4-djh96 |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c265fd635e36ef28c00f961a9969135e715f43af7f42455c9bde03a6b95ddc3e" in 3.341s (3.341s including waiting). Image size: 451163388 bytes. | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ServiceAccountCreated |
Created ServiceAccount/prometheus-operator-admission-webhook -n openshift-monitoring because it was missing | |
openshift-monitoring |
kubelet |
cluster-monitoring-operator-5b5dd85dcc-cxtgh |
Started |
Started container cluster-monitoring-operator | |
openshift-monitoring |
default-scheduler |
prometheus-operator-admission-webhook-79d5f95f5c-bg9c4 |
FailedScheduling |
0/2 nodes are available: 2 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. | |
openshift-machine-config-operator |
default-scheduler |
machine-config-daemon-49h5v |
Scheduled |
Successfully assigned openshift-machine-config-operator/machine-config-daemon-49h5v to master-1 | |
openshift-kube-scheduler |
kubelet |
installer-3-master-1 |
Started |
Started container installer | |
openshift-machine-config-operator |
kubelet |
machine-config-daemon-j26vt |
Started |
Started container machine-config-daemon | |
kube-system |
cluster-policy-controller-webhook-authenticator-cert-approver-csr-approver-controller-webhookauthenticatorcertapprover_csr-approver-controller |
bootstrap-kube-controller-manager-master-0 |
CSRApproval |
The CSR "system:openshift:openshift-monitoring-mmqs4" has been approved | |
openshift-multus |
kubelet |
multus-admission-controller-77b66fddc8-mgc7h |
Started |
Started container kube-rbac-proxy | |
openshift-multus |
kubelet |
multus-admission-controller-77b66fddc8-mgc7h |
Created |
Created container: kube-rbac-proxy | |
openshift-multus |
kubelet |
multus-admission-controller-77b66fddc8-mgc7h |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine | |
openshift-multus |
kubelet |
multus-admission-controller-77b66fddc8-mgc7h |
Started |
Started container multus-admission-controller | |
openshift-multus |
kubelet |
multus-admission-controller-77b66fddc8-mgc7h |
Created |
Created container: multus-admission-controller | |
openshift-kube-scheduler |
kubelet |
installer-3-master-1 |
Created |
Created container: installer | |
openshift-kube-scheduler |
kubelet |
installer-3-master-1 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f81582ec6e6cc159d578a2d70ce7c8a4db8eb0172334226c9123770d7d2a1642" already present on machine | |
openshift-kube-scheduler |
multus |
installer-3-master-1 |
AddedInterface |
Add eth0 [10.128.0.50/23] from ovn-kubernetes | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]" to "NodeControllerDegraded: All master nodes are ready\nGarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nGuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]" | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
SecretCreated |
Created Secret/serving-cert-4 -n openshift-kube-scheduler because it was missing | |
openshift-machine-config-operator |
kubelet |
machine-config-daemon-49h5v |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d03c8198e20c39819634ba86ebc48d182a8b3f062cf7a3847175b91294512876" already present on machine | |
openshift-machine-config-operator |
kubelet |
machine-config-daemon-49h5v |
Created |
Created container: machine-config-daemon | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nGuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]" to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]" | |
openshift-multus |
kubelet |
multus-admission-controller-77b66fddc8-9npgz |
Started |
Started container kube-rbac-proxy | |
openshift-multus |
kubelet |
multus-admission-controller-77b66fddc8-9npgz |
Created |
Created container: kube-rbac-proxy | |
openshift-multus |
kubelet |
multus-admission-controller-77b66fddc8-9npgz |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine | |
openshift-multus |
kubelet |
multus-admission-controller-77b66fddc8-9npgz |
Started |
Started container multus-admission-controller | |
openshift-multus |
kubelet |
multus-admission-controller-77b66fddc8-9npgz |
Created |
Created container: multus-admission-controller | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-installer-controller |
kube-controller-manager-operator |
PodCreated |
Created Pod/installer-2-master-1 -n openshift-kube-controller-manager because it was missing | |
kube-system |
cluster-policy-controller-webhook-authenticator-cert-approver-csr-approver-controller-webhookauthenticatorcertapprover_csr-approver-controller |
bootstrap-kube-controller-manager-master-0 |
CSRApproval |
The CSR "system:openshift:openshift-monitoring-gwkjb" has been approved | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
PodDisruptionBudgetCreated |
Created PodDisruptionBudget.policy/prometheus-operator-admission-webhook -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ServiceCreated |
Created Service/prometheus-operator-admission-webhook -n openshift-monitoring because it was missing | |
openshift-machine-config-operator |
daemonset-controller |
machine-config-daemon |
SuccessfulCreate |
Created pod: machine-config-daemon-49h5v | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/prometheus-operator because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/prometheus-operator because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
SecretCreated |
Created Secret/alert-relabel-configs -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ConfigMapCreated |
Created ConfigMap/metrics-client-ca -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ServiceAccountCreated |
Created ServiceAccount/prometheus-operator -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator-openshiftmonitoringclientcertrequester |
cluster-monitoring-operator |
ClientCertificateCreated |
A new client certificate for OpenShiftMonitoringClientCertRequester is available | |
openshift-machine-config-operator |
daemonset-controller |
machine-config-daemon |
SuccessfulCreate |
Created pod: machine-config-daemon-j26vt | |
openshift-monitoring |
cluster-monitoring-operator-openshiftmonitoringtelemeterclientcertrequester |
cluster-monitoring-operator |
ClientCertificateCreated |
A new client certificate for OpenShiftMonitoringTelemeterClientCertRequester is available | |
openshift-monitoring |
cluster-monitoring-operator-openshiftmonitoringclientcertrequester |
cluster-monitoring-operator |
CSRCreated |
A csr "system:openshift:openshift-monitoring-mmqs4" is created for OpenShiftMonitoringClientCertRequester | |
openshift-monitoring |
cluster-monitoring-operator-openshiftmonitoringtelemeterclientcertrequester |
cluster-monitoring-operator |
CSRCreated |
A csr "system:openshift:openshift-monitoring-gwkjb" is created for OpenShiftMonitoringTelemeterClientCertRequester | |
openshift-monitoring |
cluster-monitoring-operator-openshiftmonitoringclientcertrequester |
cluster-monitoring-operator |
NoValidCertificateFound |
No valid client certificate for OpenShiftMonitoringClientCertRequester is found: unable to parse certificate: data does not contain any valid RSA or ECDSA certificates | |
openshift-monitoring |
cluster-monitoring-operator-openshiftmonitoringtelemeterclientcertrequester |
cluster-monitoring-operator |
NoValidCertificateFound |
No valid client certificate for OpenShiftMonitoringTelemeterClientCertRequester is found: unable to parse certificate: data does not contain any valid RSA or ECDSA certificates | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", 
"TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} | |
openshift-machine-config-operator |
kubelet |
machine-config-daemon-j26vt |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine | |
openshift-monitoring |
kubelet |
cluster-monitoring-operator-5b5dd85dcc-cxtgh |
Created |
Created container: cluster-monitoring-operator | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/controller-manager-kubeconfig-3 -n openshift-kube-controller-manager because it was missing | |
openshift-monitoring |
deployment-controller |
prometheus-operator-admission-webhook |
ScalingReplicaSet |
Scaled up replica set prometheus-operator-admission-webhook-79d5f95f5c to 2 | |
(x2) | openshift-monitoring |
controllermanager |
prometheus-operator-admission-webhook |
NoPods |
No matching pods found |
openshift-monitoring |
replicaset-controller |
prometheus-operator-admission-webhook-79d5f95f5c |
SuccessfulCreate |
Created pod: prometheus-operator-admission-webhook-79d5f95f5c-btmxj | |
openshift-monitoring |
replicaset-controller |
prometheus-operator-admission-webhook-79d5f95f5c |
SuccessfulCreate |
Created pod: prometheus-operator-admission-webhook-79d5f95f5c-bg9c4 | |
openshift-monitoring |
default-scheduler |
prometheus-operator-admission-webhook-79d5f95f5c-btmxj |
FailedScheduling |
0/2 nodes are available: 2 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. | |
openshift-machine-config-operator |
kubelet |
machine-config-daemon-j26vt |
Created |
Created container: machine-config-daemon | |
openshift-machine-config-operator |
kubelet |
machine-config-daemon-j26vt |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d03c8198e20c39819634ba86ebc48d182a8b3f062cf7a3847175b91294512876" already present on machine | |
openshift-machine-config-operator |
default-scheduler |
machine-config-daemon-j26vt |
Scheduled |
Successfully assigned openshift-machine-config-operator/machine-config-daemon-j26vt to master-2 | |
openshift-machine-config-operator |
kubelet |
machine-config-daemon-49h5v |
Started |
Started container machine-config-daemon | |
openshift-machine-config-operator |
kubelet |
machine-config-daemon-49h5v |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine | |
openshift-machine-config-operator |
kubelet |
machine-config-daemon-j26vt |
Created |
Created container: kube-rbac-proxy | |
openshift-machine-config-operator |
kubelet |
machine-config-daemon-49h5v |
Started |
Started container kube-rbac-proxy | |
openshift-kube-controller-manager |
multus |
installer-2-master-1 |
AddedInterface |
Add eth0 [10.128.0.51/23] from ovn-kubernetes | |
openshift-kube-controller-manager |
kubelet |
installer-2-master-1 |
Started |
Started container installer | |
openshift-machine-config-operator |
kubelet |
machine-config-daemon-j26vt |
Started |
Started container kube-rbac-proxy | |
openshift-kube-controller-manager |
kubelet |
installer-2-master-1 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:67a578604f1437ddb47d87e748b6772d86dd3856048cc355226789db22724b55" already present on machine | |
openshift-kube-controller-manager |
kubelet |
installer-2-master-1 |
Created |
Created container: installer | |
openshift-machine-config-operator |
kubelet |
machine-config-daemon-49h5v |
Created |
Created container: kube-rbac-proxy | |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/kube-controller-cert-syncer-kubeconfig-3 -n openshift-kube-controller-manager because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/machine-config-controller because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/machine-config-controller-events because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/mcc-prometheus-k8s -n openshift-machine-config-operator because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | SecretCreated | Created Secret/localhost-recovery-client-token-4 -n openshift-kube-scheduler because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/machine-config-controller-events -n default because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/mcc-prometheus-k8s -n openshift-machine-config-operator because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/machine-os-puller-binding -n openshift-machine-config-operator because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/machine-config-controller-events -n openshift-machine-config-operator because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/controller-manager: updated replicas is 1, desired replicas is 2\nProgressing: deployment/route-controller-manager: observed generation is 1, desired generation is 2.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" to "Progressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/controller-manager: updated replicas is 1, desired replicas is 2\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: updated replicas is 1, desired replicas is 2" |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ServiceAccountCreated | Created ServiceAccount/machine-config-controller -n openshift-machine-config-operator because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/machine-config-controller because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ServiceAccountCreated | Created ServiceAccount/machine-os-puller -n openshift-machine-config-operator because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ValidatingAdmissionPolicyCreated | Created ValidatingAdmissionPolicy.admissionregistration.k8s.io/machine-configuration-guards because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ValidatingAdmissionPolicyCreated | Created ValidatingAdmissionPolicy.admissionregistration.k8s.io/managed-bootimages-platform-check because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ValidatingAdmissionPolicyCreated | Created ValidatingAdmissionPolicy.admissionregistration.k8s.io/custom-machine-config-pool-selector because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ValidatingAdmissionPolicyBindingCreated | Created ValidatingAdmissionPolicyBinding.admissionregistration.k8s.io/machine-configuration-guards-binding because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ValidatingAdmissionPolicyBindingCreated | Created ValidatingAdmissionPolicyBinding.admissionregistration.k8s.io/managed-bootimages-platform-check-binding because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | RevisionTriggered | new revision 4 triggered by "required configmap/serviceaccount-ca has changed" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/serviceaccount-ca-3 -n openshift-kube-controller-manager because it was missing |
| | openshift-operator-lifecycle-manager | kubelet | olm-operator-867f8475d9-fl56c | Started | Started container olm-operator |
| | openshift-machine-config-operator | deployment-controller | machine-config-controller | ScalingReplicaSet | Scaled up replica set machine-config-controller-6dcc7bf8f6 to 1 |
| | openshift-machine-config-operator | default-scheduler | machine-config-controller-6dcc7bf8f6-s7t2x | Scheduled | Successfully assigned openshift-machine-config-operator/machine-config-controller-6dcc7bf8f6-s7t2x to master-2 |
| | openshift-operator-lifecycle-manager | kubelet | catalog-operator-f966fb6f8-dwwm2 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac3e8e21a2acf57632da1156613d3ce424cc06446f4bd47349c7919367e1ff0f" in 7.682s (7.682s including waiting). Image size: 855643597 bytes. |
| | openshift-operator-lifecycle-manager | kubelet | catalog-operator-f966fb6f8-dwwm2 | Created | Created container: catalog-operator |
| | openshift-operator-lifecycle-manager | kubelet | catalog-operator-f966fb6f8-dwwm2 | Started | Started container catalog-operator |
| | openshift-operator-lifecycle-manager | kubelet | olm-operator-867f8475d9-fl56c | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac3e8e21a2acf57632da1156613d3ce424cc06446f4bd47349c7919367e1ff0f" in 7.826s (7.826s including waiting). Image size: 855643597 bytes. |
| | openshift-operator-lifecycle-manager | kubelet | olm-operator-867f8475d9-fl56c | Created | Created container: olm-operator |
| | openshift-operator-lifecycle-manager | kubelet | package-server-manager-798cc87f55-j2bjv | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac3e8e21a2acf57632da1156613d3ce424cc06446f4bd47349c7919367e1ff0f" in 7.564s (7.564s including waiting). Image size: 855643597 bytes. |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ValidatingAdmissionPolicyBindingCreated | Created ValidatingAdmissionPolicyBinding.admissionregistration.k8s.io/custom-machine-config-pool-selector-binding because it was missing |
| | openshift-operator-lifecycle-manager | kubelet | package-server-manager-798cc87f55-j2bjv | Created | Created container: package-server-manager |
| | openshift-operator-lifecycle-manager | kubelet | package-server-manager-798cc87f55-j2bjv | Started | Started container package-server-manager |
| | openshift-operator-lifecycle-manager | package-server-manager-798cc87f55-j2bjv_8c5b7084-ac4a-4760-8358-a79f4fba8b60 | packageserver-controller-lock | LeaderElection | package-server-manager-798cc87f55-j2bjv_8c5b7084-ac4a-4760-8358-a79f4fba8b60 became leader |
| | openshift-machine-config-operator | replicaset-controller | machine-config-controller-6dcc7bf8f6 | SuccessfulCreate | Created pod: machine-config-controller-6dcc7bf8f6-s7t2x |
| (x25) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-satokensignercontroller | kube-controller-manager-operator | SATokenSignerControllerStuck | unexpected addresses: 192.168.34.10 |
| | openshift-ingress | default-scheduler | router-default-5ddb89f76-887cs | FailedScheduling | 0/2 nodes are available: 1 node(s) didn't have free ports for the requested pod ports, 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/2 nodes are available: 1 Preemption is not helpful for scheduling, 1 node(s) didn't have free ports for the requested pod ports. |
| | openshift-monitoring | default-scheduler | prometheus-operator-admission-webhook-79d5f95f5c-btmxj | FailedScheduling | 0/2 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/2 nodes are available: 1 Preemption is not helpful for scheduling, 1 node(s) didn't match pod anti-affinity rules. |
| | openshift-marketplace | default-scheduler | certified-operators-kpbmd | Scheduled | Successfully assigned openshift-marketplace/certified-operators-kpbmd to master-2 |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.182.42:443/healthz\": dial tcp 172.30.182.42:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 2 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.182.42:443/healthz\": dial tcp 172.30.182.42:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/service-ca-3 -n openshift-kube-controller-manager because it was missing |
| | openshift-operator-lifecycle-manager | operator-lifecycle-manager | packageserver | RequirementsUnknown | requirements not yet checked |
| | openshift-machine-config-operator | kubelet | machine-config-controller-6dcc7bf8f6-s7t2x | Started | Started container kube-rbac-proxy |
| | openshift-machine-config-operator | kubelet | machine-config-controller-6dcc7bf8f6-s7t2x | Created | Created container: kube-rbac-proxy |
| | openshift-monitoring | default-scheduler | prometheus-operator-admission-webhook-79d5f95f5c-bg9c4 | Scheduled | Successfully assigned openshift-monitoring/prometheus-operator-admission-webhook-79d5f95f5c-bg9c4 to master-1 |
| | openshift-machine-config-operator | kubelet | machine-config-controller-6dcc7bf8f6-s7t2x | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine |
| | openshift-machine-config-operator | machine-config-operator | master-1 | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-machine-config-operator | kubelet | machine-config-controller-6dcc7bf8f6-s7t2x | Started | Started container machine-config-controller |
| | openshift-machine-config-operator | kubelet | machine-config-controller-6dcc7bf8f6-s7t2x | Created | Created container: machine-config-controller |
| | openshift-machine-config-operator | kubelet | machine-config-controller-6dcc7bf8f6-s7t2x | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d03c8198e20c39819634ba86ebc48d182a8b3f062cf7a3847175b91294512876" already present on machine |
| | openshift-network-diagnostics | default-scheduler | network-check-source-967c7bb47-bzqnw | Scheduled | Successfully assigned openshift-network-diagnostics/network-check-source-967c7bb47-bzqnw to master-2 |
| | openshift-ingress | default-scheduler | router-default-5ddb89f76-xf924 | Scheduled | Successfully assigned openshift-ingress/router-default-5ddb89f76-xf924 to master-1 |
| | openshift-ingress | kubelet | router-default-5ddb89f76-xf924 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:776b1203d0e4c0522ff38ffceeddfbad096e187b4d4c927f3ad89bac5f40d5c8" |
| | openshift-machine-config-operator | multus | machine-config-controller-6dcc7bf8f6-s7t2x | AddedInterface | Add eth0 [10.129.0.18/23] from ovn-kubernetes |
| | openshift-marketplace | default-scheduler | community-operators-cf69d | Scheduled | Successfully assigned openshift-marketplace/community-operators-cf69d to master-2 |
| | openshift-operator-lifecycle-manager | kubelet | packageserver-6f5778dccb-9sfms | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac3e8e21a2acf57632da1156613d3ce424cc06446f4bd47349c7919367e1ff0f" |
| | openshift-operator-lifecycle-manager | deployment-controller | packageserver | ScalingReplicaSet | Scaled up replica set packageserver-6f5778dccb to 2 |
| | openshift-marketplace | kubelet | certified-operators-kpbmd | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac3e8e21a2acf57632da1156613d3ce424cc06446f4bd47349c7919367e1ff0f" |
| | openshift-marketplace | multus | certified-operators-kpbmd | AddedInterface | Add eth0 [10.129.0.20/23] from ovn-kubernetes |
| | openshift-marketplace | multus | community-operators-cf69d | AddedInterface | Add eth0 [10.129.0.21/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | community-operators-cf69d | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac3e8e21a2acf57632da1156613d3ce424cc06446f4bd47349c7919367e1ff0f" |
| | openshift-operator-lifecycle-manager | default-scheduler | packageserver-6f5778dccb-9sfms | Scheduled | Successfully assigned openshift-operator-lifecycle-manager/packageserver-6f5778dccb-9sfms to master-2 |
| | openshift-operator-lifecycle-manager | default-scheduler | packageserver-6f5778dccb-kwxxp | Scheduled | Successfully assigned openshift-operator-lifecycle-manager/packageserver-6f5778dccb-kwxxp to master-1 |
| | openshift-operator-lifecycle-manager | replicaset-controller | packageserver-6f5778dccb | SuccessfulCreate | Created pod: packageserver-6f5778dccb-9sfms |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Progressing message changed from "NodeInstallerProgressing: 2 nodes are at revision 0; 0 nodes have achieved new revision 3" to "NodeInstallerProgressing: 2 nodes are at revision 0; 0 nodes have achieved new revision 4",Available message changed from "StaticPodsAvailable: 0 nodes are active; 2 nodes are at revision 0; 0 nodes have achieved new revision 3" to "StaticPodsAvailable: 0 nodes are active; 2 nodes are at revision 0; 0 nodes have achieved new revision 4" |
| | openshift-network-diagnostics | kubelet | network-check-source-967c7bb47-bzqnw | Started | Started container check-endpoints |
| | openshift-operator-lifecycle-manager | replicaset-controller | packageserver-6f5778dccb | SuccessfulCreate | Created pod: packageserver-6f5778dccb-kwxxp |
| | openshift-kube-scheduler | kubelet | installer-3-master-1 | Killing | Stopping container installer |
| | openshift-operator-lifecycle-manager | operator-lifecycle-manager | packageserver | AllRequirementsMet | all requirements found, attempting install |
| | openshift-network-diagnostics | kubelet | network-check-source-967c7bb47-bzqnw | Created | Created container: check-endpoints |
| | openshift-network-diagnostics | kubelet | network-check-source-967c7bb47-bzqnw | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1656551c63dc1b09263ccc5fb52a13dff12d57e1c7510529789df1b41d253aa9" already present on machine |
| | openshift-network-diagnostics | multus | network-check-source-967c7bb47-bzqnw | AddedInterface | Add eth0 [10.129.0.19/23] from ovn-kubernetes |
| | openshift-operator-lifecycle-manager | operator-lifecycle-manager | packageserver | InstallSucceeded | waiting for install components to report healthy |
| | openshift-monitoring | multus | prometheus-operator-admission-webhook-79d5f95f5c-bg9c4 | AddedInterface | Add eth0 [10.128.0.52/23] from ovn-kubernetes |
| | openshift-monitoring | kubelet | prometheus-operator-admission-webhook-79d5f95f5c-bg9c4 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2d17032145778e4a4adaeb2bd2a4107c77dc2b0f600d7d704f50648b6198801a" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/recycler-config-3 -n openshift-kube-controller-manager because it was missing |
| | openshift-operator-lifecycle-manager | multus | packageserver-6f5778dccb-9sfms | AddedInterface | Add eth0 [10.129.0.22/23] from ovn-kubernetes |
| | openshift-monitoring | default-scheduler | prometheus-operator-admission-webhook-79d5f95f5c-btmxj | Scheduled | Successfully assigned openshift-monitoring/prometheus-operator-admission-webhook-79d5f95f5c-btmxj to master-2 |
| | openshift-ingress | default-scheduler | router-default-5ddb89f76-887cs | Scheduled | Successfully assigned openshift-ingress/router-default-5ddb89f76-887cs to master-2 |
| | openshift-ingress | kubelet | router-default-5ddb89f76-887cs | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:776b1203d0e4c0522ff38ffceeddfbad096e187b4d4c927f3ad89bac5f40d5c8" |
| (x2) | openshift-operator-lifecycle-manager | operator-lifecycle-manager | packageserver | InstallWaiting | apiServices not installed |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreateFailed | Failed to create Secret/service-account-private-key-3 -n openshift-kube-controller-manager: client rate limiter Wait returned an error: context canceled |
| | openshift-marketplace | kubelet | redhat-marketplace-frksz | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac3e8e21a2acf57632da1156613d3ce424cc06446f4bd47349c7919367e1ff0f" |
| | openshift-operator-lifecycle-manager | kubelet | packageserver-6f5778dccb-kwxxp | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac3e8e21a2acf57632da1156613d3ce424cc06446f4bd47349c7919367e1ff0f" already present on machine |
| | openshift-operator-lifecycle-manager | multus | packageserver-6f5778dccb-kwxxp | AddedInterface | Add eth0 [10.128.0.53/23] from ovn-kubernetes |
| | openshift-marketplace | multus | redhat-marketplace-frksz | AddedInterface | Add eth0 [10.129.0.23/23] from ovn-kubernetes |
| | openshift-marketplace | default-scheduler | redhat-marketplace-frksz | Scheduled | Successfully assigned openshift-marketplace/redhat-marketplace-frksz to master-2 |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ServiceAccountCreated | Created ServiceAccount/node-bootstrapper -n openshift-machine-config-operator because it was missing |
| | openshift-machine-config-operator | daemonset-controller | machine-config-server | SuccessfulCreate | Created pod: machine-config-server-zxkbj |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/machine-config-server because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/machine-config-server because it was missing |
| | openshift-service-ca-operator | kubelet | service-ca-operator-568c655666-t6c8q | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:97de153ac76971fa69d4af7166c63416fbe37d759deb7833340c1c39d418b745" already present on machine |
| | openshift-operator-lifecycle-manager | kubelet | packageserver-6f5778dccb-kwxxp | Created | Created container: packageserver |
| (x2) | openshift-service-ca-operator | kubelet | service-ca-operator-568c655666-t6c8q | Created | Created container: service-ca-operator |
| | openshift-kube-controller-manager-operator | kubelet | kube-controller-manager-operator-5d85974df9-ppzvt | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:67a578604f1437ddb47d87e748b6772d86dd3856048cc355226789db22724b55" already present on machine |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | PodCreated | Created Pod/installer-4-master-1 -n openshift-kube-scheduler because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system-bootstrap-node-renewal because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ServiceAccountCreated | Created ServiceAccount/machine-config-server -n openshift-machine-config-operator because it was missing |
| | openshift-monitoring | kubelet | prometheus-operator-admission-webhook-79d5f95f5c-btmxj | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2d17032145778e4a4adaeb2bd2a4107c77dc2b0f600d7d704f50648b6198801a" |
| | openshift-monitoring | multus | prometheus-operator-admission-webhook-79d5f95f5c-btmxj | AddedInterface | Add eth0 [10.129.0.24/23] from ovn-kubernetes |
| (x2) | openshift-kube-controller-manager-operator | kubelet | kube-controller-manager-operator-5d85974df9-ppzvt | Created | Created container: kube-controller-manager-operator |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | SecretCreated | Created Secret/node-bootstrapper-token -n openshift-machine-config-operator because it was missing |
| | openshift-machine-config-operator | default-scheduler | machine-config-server-b6pv4 | Scheduled | Successfully assigned openshift-machine-config-operator/machine-config-server-b6pv4 to master-1 |
| | openshift-machine-config-operator | default-scheduler | machine-config-server-zxkbj | Scheduled | Successfully assigned openshift-machine-config-operator/machine-config-server-zxkbj to master-2 |
| | openshift-marketplace | default-scheduler | redhat-operators-xl9gv | Scheduled | Successfully assigned openshift-marketplace/redhat-operators-xl9gv to master-2 |
| | openshift-machine-config-operator | daemonset-controller | machine-config-server | SuccessfulCreate | Created pod: machine-config-server-b6pv4 |
| | openshift-kube-scheduler | kubelet | installer-4-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f81582ec6e6cc159d578a2d70ce7c8a4db8eb0172334226c9123770d7d2a1642" already present on machine |
| | openshift-kube-scheduler | multus | installer-4-master-1 | AddedInterface | Add eth0 [10.128.0.54/23] from ovn-kubernetes |
| | openshift-marketplace | multus | redhat-operators-xl9gv | AddedInterface | Add eth0 [10.129.0.25/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | redhat-operators-xl9gv | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac3e8e21a2acf57632da1156613d3ce424cc06446f4bd47349c7919367e1ff0f" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator | kube-controller-manager-operator-lock | LeaderElection | kube-controller-manager-operator-5d85974df9-ppzvt_a54aef33-0c99-44a4-9956-9357c240b196 became leader |
| | openshift-service-ca-operator | service-ca-operator | service-ca-operator-lock | LeaderElection | service-ca-operator-568c655666-t6c8q_0b856754-559d-44d7-8d7b-61cbccad8b44 became leader |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator | kube-controller-manager-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-etcd | static-pod-installer | installer-1-master-1 | StaticPodInstallerCompleted | Successfully installed revision 1 |
| (x2) | openshift-service-ca-operator | kubelet | service-ca-operator-568c655666-t6c8q | Started | Started container service-ca-operator |
| | openshift-machine-config-operator | kubelet | machine-config-server-b6pv4 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d03c8198e20c39819634ba86ebc48d182a8b3f062cf7a3847175b91294512876" already present on machine |
| | openshift-operator-lifecycle-manager | kubelet | packageserver-6f5778dccb-kwxxp | Started | Started container packageserver |
| | openshift-ingress | kubelet | router-default-5ddb89f76-887cs | Created | Created container: router |
| | openshift-ingress | kubelet | router-default-5ddb89f76-887cs | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:776b1203d0e4c0522ff38ffceeddfbad096e187b4d4c927f3ad89bac5f40d5c8" in 2.215s (2.215s including waiting). Image size: 489230204 bytes. |
| | openshift-multus | multus | network-metrics-daemon-b84p7 | AddedInterface | Add eth0 [10.129.0.3/23] from ovn-kubernetes |
| | openshift-machine-config-operator | kubelet | machine-config-server-zxkbj | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d03c8198e20c39819634ba86ebc48d182a8b3f062cf7a3847175b91294512876" already present on machine |
| | openshift-machine-config-operator | machineconfigcontroller-rendercontroller | master | RenderedConfigGenerated | rendered-master-1c6c5fbd3d52010f02d09a7fd9bcec0a successfully generated (release version: 4.18.25, controller version: 4929be38a15cf61a9f9ddeaf1ba89d185aa72611) |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | StartingNewRevision | new revision 3 triggered by "required configmap/kube-controller-manager-pod has changed,required configmap/serviceaccount-ca has changed" |
| | openshift-ingress | kubelet | router-default-5ddb89f76-887cs | Started | Started container router |
| | openshift-machine-config-operator | kubelet | machine-config-server-zxkbj | Created | Created container: machine-config-server |
| | openshift-etcd | kubelet | etcd-master-1 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:145b8ac6899b60bd933b5fe64e3eb49ddbc7401a13f30fda6fd207697e8c9ab8" |
| | openshift-machine-config-operator | kubelet | machine-config-server-zxkbj | Started | Started container machine-config-server |
| | openshift-monitoring | kubelet | prometheus-operator-admission-webhook-79d5f95f5c-btmxj | Started | Started container prometheus-operator-admission-webhook |
| | openshift-monitoring | kubelet | prometheus-operator-admission-webhook-79d5f95f5c-btmxj | Created | Created container: prometheus-operator-admission-webhook |
| | openshift-monitoring | kubelet | prometheus-operator-admission-webhook-79d5f95f5c-btmxj | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2d17032145778e4a4adaeb2bd2a4107c77dc2b0f600d7d704f50648b6198801a" in 1.81s (1.81s including waiting). Image size: 437614192 bytes. |
| | openshift-machine-config-operator | machineconfigcontroller-rendercontroller | worker | RenderedConfigGenerated | rendered-worker-315034d1be830a68a5e18c5c50146791 successfully generated (release version: 4.18.25, controller version: 4929be38a15cf61a9f9ddeaf1ba89d185aa72611) |
| | openshift-multus | multus | network-metrics-daemon-8l654 | AddedInterface | Add eth0 [10.128.0.3/23] from ovn-kubernetes |
| (x2) | openshift-kube-controller-manager-operator | kubelet | kube-controller-manager-operator-5d85974df9-ppzvt | Started | Started container kube-controller-manager-operator |
| | openshift-machine-config-operator | machineconfigcontroller-nodecontroller | master | AnnotationChange | Node master-2 now has machineconfiguration.openshift.io/currentConfig=rendered-master-1c6c5fbd3d52010f02d09a7fd9bcec0a |
| | openshift-machine-config-operator | machineconfigcontroller-nodecontroller | master | AnnotationChange | Node master-2 now has machineconfiguration.openshift.io/desiredConfig=rendered-master-1c6c5fbd3d52010f02d09a7fd9bcec0a |
| | openshift-machine-config-operator | kubelet | machine-config-server-b6pv4 | Created | Created container: machine-config-server |
| (x2) | openshift-etcd | controllermanager | etcd-guard-pdb | NoPods | No matching pods found |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-guardcontroller | etcd-operator | PodDisruptionBudgetCreated | Created PodDisruptionBudget.policy/etcd-guard-pdb -n openshift-etcd because it was missing |
| | openshift-machine-config-operator | machineconfigcontroller-nodecontroller | master | AnnotationChange | Node master-2 now has machineconfiguration.openshift.io/state=Done |
| (x50) | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | RequiredInstallerResourcesMissing | configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0 |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
SecretCreated |
Created Secret/service-account-private-key-3 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-scheduler |
kubelet |
installer-4-master-1 |
Created |
Created container: installer | |
openshift-machine-config-operator |
machineconfigcontroller-nodecontroller |
master |
AnnotationChange |
Node master-1 now has machineconfiguration.openshift.io/desiredConfig=rendered-master-1c6c5fbd3d52010f02d09a7fd9bcec0a | |
openshift-machine-config-operator |
machineconfigcontroller-nodecontroller |
master |
AnnotationChange |
Node master-1 now has machineconfiguration.openshift.io/currentConfig=rendered-master-1c6c5fbd3d52010f02d09a7fd9bcec0a | |
openshift-machine-config-operator |
kubelet |
machine-config-server-b6pv4 |
Started |
Started container machine-config-server | |
openshift-machine-config-operator |
machineconfigcontroller-nodecontroller |
master |
AnnotationChange |
Node master-1 now has machineconfiguration.openshift.io/state=Done | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
SecretCreated |
Created Secret/serving-cert-3 -n openshift-kube-controller-manager because it was missing | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/machine-os-builder because it was missing | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/machine-os-builder-events -n openshift-machine-config-operator because it was missing | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-guardcontroller |
etcd-operator |
PodCreated |
Created Pod/etcd-guard-master-1 -n openshift-etcd because it was missing | |
openshift-kube-scheduler |
kubelet |
installer-4-master-1 |
Started |
Started container installer | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/machine-os-builder-events -n default because it was missing | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/machine-os-builder-events because it was missing | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/machine-os-builder because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
SecretCreated |
Created Secret/localhost-recovery-client-token-3 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
RevisionTriggered |
new revision 3 triggered by "required configmap/kube-controller-manager-pod has changed,required configmap/serviceaccount-ca has changed" | |
openshift-cloud-controller-manager-operator |
master-2_928ed986-e609-444e-a115-c3c85eeb50cd |
cluster-cloud-controller-manager-leader |
LeaderElection |
master-2_928ed986-e609-444e-a115-c3c85eeb50cd became leader | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ServiceAccountCreated |
Created ServiceAccount/machine-os-builder -n openshift-machine-config-operator because it was missing | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/machine-os-builder-anyuid because it was missing | |
(x2) | openshift-machine-config-operator |
machineconfigoperator |
machine-config |
OperatorVersionChanged |
clusteroperator/machine-config started a version change from [] to [{operator 4.18.25} {operator-image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d03c8198e20c39819634ba86ebc48d182a8b3f062cf7a3847175b91294512876}] |
openshift-marketplace |
kubelet |
certified-operators-kpbmd |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac3e8e21a2acf57632da1156613d3ce424cc06446f4bd47349c7919367e1ff0f" in 12.532s (12.532s including waiting). Image size: 855643597 bytes. | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ConfigMapUpdated |
Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: caused by changes in data.config-file.yaml | |
openshift-marketplace |
kubelet |
community-operators-cf69d |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac3e8e21a2acf57632da1156613d3ce424cc06446f4bd47349c7919367e1ff0f" in 12.374s (12.374s including waiting). Image size: 855643597 bytes. | |
openshift-marketplace |
kubelet |
certified-operators-kpbmd |
Started |
Started container extract-utilities | |
openshift-marketplace |
kubelet |
redhat-operators-xl9gv |
Started |
Started container extract-utilities | |
openshift-marketplace |
kubelet |
redhat-operators-xl9gv |
Created |
Created container: extract-utilities | |
openshift-marketplace |
kubelet |
redhat-operators-xl9gv |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac3e8e21a2acf57632da1156613d3ce424cc06446f4bd47349c7919367e1ff0f" in 9.462s (9.462s including waiting). Image size: 855643597 bytes. | |
openshift-marketplace |
kubelet |
certified-operators-kpbmd |
Created |
Created container: extract-utilities | |
openshift-marketplace |
kubelet |
community-operators-cf69d |
Created |
Created container: extract-utilities | |
openshift-marketplace |
kubelet |
redhat-marketplace-frksz |
Started |
Started container extract-utilities | |
openshift-marketplace |
kubelet |
redhat-marketplace-frksz |
Created |
Created container: extract-utilities | |
openshift-marketplace |
kubelet |
redhat-marketplace-frksz |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac3e8e21a2acf57632da1156613d3ce424cc06446f4bd47349c7919367e1ff0f" in 10.901s (10.901s including waiting). Image size: 855643597 bytes. | |
openshift-operator-lifecycle-manager |
kubelet |
packageserver-6f5778dccb-9sfms |
Created |
Created container: packageserver | |
openshift-operator-lifecycle-manager |
kubelet |
packageserver-6f5778dccb-9sfms |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac3e8e21a2acf57632da1156613d3ce424cc06446f4bd47349c7919367e1ff0f" in 11.781s (11.781s including waiting). Image size: 855643597 bytes. | |
openshift-marketplace |
kubelet |
community-operators-cf69d |
Started |
Started container extract-utilities | |
openshift-marketplace |
kubelet |
community-operators-cf69d |
Pulling |
Pulling image "registry.redhat.io/redhat/community-operator-index:v4.18" | |
openshift-marketplace |
kubelet |
redhat-marketplace-frksz |
Pulling |
Pulling image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" | |
openshift-operator-lifecycle-manager |
kubelet |
packageserver-6f5778dccb-9sfms |
Started |
Started container packageserver | |
openshift-marketplace |
kubelet |
certified-operators-kpbmd |
Pulling |
Pulling image "registry.redhat.io/redhat/certified-operator-index:v4.18" | |
openshift-marketplace |
kubelet |
redhat-operators-xl9gv |
Pulling |
Pulling image "registry.redhat.io/redhat/redhat-operator-index:v4.18" | |
(x10) | openshift-ingress |
kubelet |
router-default-5ddb89f76-887cs |
Unhealthy |
Startup probe failed: HTTP probe failed with statuscode: 500 |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Progressing message changed from "NodeInstallerProgressing: 2 nodes are at revision 0; 0 nodes have achieved new revision 2" to "NodeInstallerProgressing: 2 nodes are at revision 0; 0 nodes have achieved new revision 3",Available message changed from "StaticPodsAvailable: 0 nodes are active; 2 nodes are at revision 0; 0 nodes have achieved new revision 2" to "StaticPodsAvailable: 0 nodes are active; 2 nodes are at revision 0; 0 nodes have achieved new revision 3" | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-guardcontroller |
etcd-operator |
PodUpdated |
Updated Pod/etcd-guard-master-1 -n openshift-etcd because it changed | |
openshift-monitoring |
kubelet |
prometheus-operator-admission-webhook-79d5f95f5c-bg9c4 |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2d17032145778e4a4adaeb2bd2a4107c77dc2b0f600d7d704f50648b6198801a" in 14.191s (14.191s including waiting). Image size: 437614192 bytes. | |
(x11) | openshift-ingress |
kubelet |
router-default-5ddb89f76-887cs |
ProbeError |
Startup probe error: HTTP probe failed with statuscode: 500 body: [-]backend-http failed: reason withheld [-]has-synced failed: reason withheld [+]process-running ok healthz check failed |
openshift-kube-controller-manager |
kubelet |
installer-2-master-1 |
Killing |
Stopping container installer | |
openshift-ingress |
kubelet |
router-default-5ddb89f76-xf924 |
Started |
Started container router | |
openshift-ingress |
kubelet |
router-default-5ddb89f76-xf924 |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:776b1203d0e4c0522ff38ffceeddfbad096e187b4d4c927f3ad89bac5f40d5c8" in 15.336s (15.336s including waiting). Image size: 489230204 bytes. | |
openshift-etcd |
kubelet |
etcd-guard-master-1 |
Started |
Started container guard | |
openshift-etcd |
kubelet |
etcd-guard-master-1 |
Created |
Created container: guard | |
openshift-etcd |
kubelet |
etcd-guard-master-1 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0117f94d9f2894980a318780f3c0ab2efba02e72bc7ccb267bd44c4900eb0174" already present on machine | |
openshift-etcd |
multus |
etcd-guard-master-1 |
AddedInterface |
Add eth0 [10.128.0.55/23] from ovn-kubernetes | |
(x2) | openshift-kube-storage-version-migrator-operator |
kubelet |
kube-storage-version-migrator-operator-dcfdffd74-ckmcc |
Created |
Created container: kube-storage-version-migrator-operator |
(x2) | openshift-kube-storage-version-migrator-operator |
kubelet |
kube-storage-version-migrator-operator-dcfdffd74-ckmcc |
Started |
Started container kube-storage-version-migrator-operator |
openshift-kube-storage-version-migrator-operator |
kubelet |
kube-storage-version-migrator-operator-dcfdffd74-ckmcc |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b9e086347802546d8040d17296f434edf088305103b874c900beee3a3575c34" already present on machine | |
openshift-monitoring |
kubelet |
prometheus-operator-admission-webhook-79d5f95f5c-bg9c4 |
Created |
Created container: prometheus-operator-admission-webhook | |
openshift-monitoring |
kubelet |
prometheus-operator-admission-webhook-79d5f95f5c-bg9c4 |
Started |
Started container prometheus-operator-admission-webhook | |
openshift-ingress |
kubelet |
router-default-5ddb89f76-xf924 |
Created |
Created container: router | |
openshift-monitoring |
deployment-controller |
prometheus-operator |
ScalingReplicaSet |
Scaled up replica set prometheus-operator-574d7f8db8 to 1 | |
openshift-monitoring |
replicaset-controller |
prometheus-operator-574d7f8db8 |
SuccessfulCreate |
Created pod: prometheus-operator-574d7f8db8-gbr5b | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ValidatingWebhookConfigurationCreated |
Created ValidatingWebhookConfiguration.admissionregistration.k8s.io/alertmanagerconfigs.openshift.io because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ServiceCreated |
Created Service/prometheus-operator -n openshift-monitoring because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-installer-controller |
kube-controller-manager-operator |
PodCreated |
Created Pod/installer-3-master-1 -n openshift-kube-controller-manager because it was missing | |
openshift-monitoring |
default-scheduler |
prometheus-operator-574d7f8db8-gbr5b |
Scheduled |
Successfully assigned openshift-monitoring/prometheus-operator-574d7f8db8-gbr5b to master-2 | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ValidatingWebhookConfigurationCreated |
Created ValidatingWebhookConfiguration.admissionregistration.k8s.io/prometheusrules.openshift.io because it was missing | |
openshift-kube-storage-version-migrator-operator |
openshift-kube-storage-version-migrator-operator |
openshift-kube-storage-version-migrator-operator-lock |
LeaderElection |
kube-storage-version-migrator-operator-dcfdffd74-ckmcc_f7c70dbb-ab69-45d3-bdce-3fde258d1508 became leader | |
openshift-etcd |
kubelet |
etcd-master-1 |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:145b8ac6899b60bd933b5fe64e3eb49ddbc7401a13f30fda6fd207697e8c9ab8" in 13.209s (13.209s including waiting). Image size: 531186824 bytes. | |
openshift-kube-controller-manager |
kubelet |
installer-3-master-1 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:67a578604f1437ddb47d87e748b6772d86dd3856048cc355226789db22724b55" already present on machine | |
openshift-monitoring |
kubelet |
prometheus-operator-574d7f8db8-gbr5b |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0a666f70f1223d9d2e6cfda2fb89ae1646dc73b9d2e78f0d31074c3e7f723aeb" | |
openshift-etcd |
kubelet |
etcd-master-1 |
Started |
Started container setup | |
(x2) | openshift-machine-config-operator |
machineconfigoperator |
machine-config |
OperatorVersionChanged |
clusteroperator/machine-config version changed from [] to [{operator 4.18.25} {operator-image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d03c8198e20c39819634ba86ebc48d182a8b3f062cf7a3847175b91294512876}] |
openshift-kube-controller-manager |
multus |
installer-3-master-1 |
AddedInterface |
Add eth0 [10.128.0.56/23] from ovn-kubernetes | |
openshift-etcd |
kubelet |
etcd-master-1 |
Created |
Created container: setup | |
openshift-monitoring |
multus |
prometheus-operator-574d7f8db8-gbr5b |
AddedInterface |
Add eth0 [10.129.0.26/23] from ovn-kubernetes | |
openshift-network-operator |
network-operator |
network-operator-lock |
LeaderElection |
master-1_2bf4b117-106b-4da7-a833-dca9bca23203 became leader | |
openshift-kube-controller-manager |
kubelet |
installer-3-master-1 |
Created |
Created container: installer | |
openshift-kube-controller-manager |
kubelet |
installer-3-master-1 |
Started |
Started container installer | |
openshift-etcd |
kubelet |
etcd-master-1 |
Started |
Started container etcd-ensure-env-vars | |
openshift-network-operator |
cluster-network-operator |
network-operator |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} | |
openshift-etcd |
kubelet |
etcd-master-1 |
Created |
Created container: etcd-ensure-env-vars | |
openshift-etcd |
kubelet |
etcd-master-1 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:145b8ac6899b60bd933b5fe64e3eb49ddbc7401a13f30fda6fd207697e8c9ab8" already present on machine | |
openshift-etcd |
kubelet |
etcd-master-1 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:145b8ac6899b60bd933b5fe64e3eb49ddbc7401a13f30fda6fd207697e8c9ab8" already present on machine | |
openshift-monitoring |
kubelet |
prometheus-operator-574d7f8db8-gbr5b |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine | |
openshift-multus |
kubelet |
cni-sysctl-allowlist-ds-sg92v |
Created |
Created container: kube-multus-additional-cni-plugins | |
openshift-multus |
default-scheduler |
cni-sysctl-allowlist-ds-sg92v |
Scheduled |
Successfully assigned openshift-multus/cni-sysctl-allowlist-ds-sg92v to master-2 | |
openshift-multus |
kubelet |
cni-sysctl-allowlist-ds-sg92v |
Started |
Started container kube-multus-additional-cni-plugins | |
openshift-multus |
kubelet |
cni-sysctl-allowlist-ds-fmkcf |
Started |
Started container kube-multus-additional-cni-plugins | |
openshift-multus |
kubelet |
cni-sysctl-allowlist-ds-fmkcf |
Created |
Created container: kube-multus-additional-cni-plugins | |
openshift-multus |
kubelet |
cni-sysctl-allowlist-ds-fmkcf |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd0854905c4929cfbb163b57dd290d4a74e65d11c01d86b5e1e177a0c246106e" already present on machine | |
openshift-multus |
daemonset-controller |
cni-sysctl-allowlist-ds |
SuccessfulCreate |
Created pod: cni-sysctl-allowlist-ds-sg92v | |
openshift-etcd |
kubelet |
etcd-master-1 |
Created |
Created container: etcd-resources-copy | |
openshift-etcd |
kubelet |
etcd-master-1 |
Started |
Started container etcd-resources-copy | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ConfigMapUpdated |
Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: caused by changes in data.config-file.yaml | |
openshift-multus |
default-scheduler |
cni-sysctl-allowlist-ds-fmkcf |
Scheduled |
Successfully assigned openshift-multus/cni-sysctl-allowlist-ds-fmkcf to master-1 | |
(x22) | openshift-kube-apiserver-operator |
kube-apiserver-operator-target-config-controller-targetconfigcontroller |
kube-apiserver-operator |
ConfigMissing |
apiServerArguments.etcd-servers has less than three endpoints: [https://192.168.34.10:2379 https://localhost:2379] |
openshift-multus |
daemonset-controller |
cni-sysctl-allowlist-ds |
SuccessfulCreate |
Created pod: cni-sysctl-allowlist-ds-fmkcf | |
openshift-monitoring |
kubelet |
prometheus-operator-574d7f8db8-gbr5b |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0a666f70f1223d9d2e6cfda2fb89ae1646dc73b9d2e78f0d31074c3e7f723aeb" in 1.635s (1.635s including waiting). Image size: 454581458 bytes. | |
openshift-monitoring |
kubelet |
prometheus-operator-574d7f8db8-gbr5b |
Created |
Created container: prometheus-operator | |
openshift-cloud-controller-manager-operator |
master-2_e9ddee44-47ca-41ab-8110-9b757f52b412 |
cluster-cloud-config-sync-leader |
LeaderElection |
master-2_e9ddee44-47ca-41ab-8110-9b757f52b412 became leader | |
openshift-monitoring |
kubelet |
prometheus-operator-574d7f8db8-gbr5b |
Started |
Started container prometheus-operator | |
openshift-multus |
kubelet |
cni-sysctl-allowlist-ds-sg92v |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd0854905c4929cfbb163b57dd290d4a74e65d11c01d86b5e1e177a0c246106e" already present on machine | |
openshift-monitoring |
kubelet |
prometheus-operator-574d7f8db8-gbr5b |
Created |
Created container: kube-rbac-proxy | |
openshift-monitoring |
kubelet |
prometheus-operator-574d7f8db8-gbr5b |
Started |
Started container kube-rbac-proxy | |
openshift-etcd |
kubelet |
etcd-master-1 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:145b8ac6899b60bd933b5fe64e3eb49ddbc7401a13f30fda6fd207697e8c9ab8" already present on machine | |
openshift-etcd |
kubelet |
etcd-master-1 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:145b8ac6899b60bd933b5fe64e3eb49ddbc7401a13f30fda6fd207697e8c9ab8" already present on machine | |
openshift-etcd |
kubelet |
etcd-master-1 |
Created |
Created container: etcdctl | |
openshift-etcd |
kubelet |
etcd-master-1 |
Started |
Started container etcdctl | |
openshift-etcd |
kubelet |
etcd-master-1 |
Created |
Created container: etcd-readyz | |
openshift-etcd |
kubelet |
etcd-master-1 |
Created |
Created container: etcd-rev | |
openshift-etcd |
kubelet |
etcd-master-1 |
Started |
Started container etcd | |
openshift-insights |
kubelet |
insights-operator-7dcf5bd85b-chrmm |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1c3058c461907ec5ff06a628e935722d7ec8bf86fa90b95269372a6dc41444ce" already present on machine | |
openshift-etcd |
kubelet |
etcd-master-1 |
Started |
Started container etcd-metrics | |
openshift-etcd |
kubelet |
etcd-master-1 |
Started |
Started container etcd-readyz | |
openshift-etcd |
kubelet |
etcd-master-1 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0117f94d9f2894980a318780f3c0ab2efba02e72bc7ccb267bd44c4900eb0174" already present on machine | |
openshift-etcd |
kubelet |
etcd-master-1 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:145b8ac6899b60bd933b5fe64e3eb49ddbc7401a13f30fda6fd207697e8c9ab8" already present on machine | |
openshift-etcd |
kubelet |
etcd-master-1 |
Created |
Created container: etcd | |
openshift-etcd |
kubelet |
etcd-master-1 |
Created |
Created container: etcd-metrics | |
(x2) | openshift-insights |
kubelet |
insights-operator-7dcf5bd85b-chrmm |
Created |
Created container: insights-operator |
openshift-etcd |
kubelet |
etcd-master-1 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0117f94d9f2894980a318780f3c0ab2efba02e72bc7ccb267bd44c4900eb0174" already present on machine | |
openshift-etcd |
kubelet |
etcd-master-1 |
Started |
Started container etcd-rev | |
(x2) | openshift-insights |
kubelet |
insights-operator-7dcf5bd85b-chrmm |
Started |
Started container insights-operator |
openshift-controller-manager-operator |
kubelet |
openshift-controller-manager-operator-5745565d84-5l45t |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f425875bda87dc167d613efc88c56256e48364b73174d1392f7d23301baec0b" already present on machine | |
openshift-multus |
kubelet |
cni-sysctl-allowlist-ds-fmkcf |
Killing |
Stopping container kube-multus-additional-cni-plugins | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-etcd-client |
etcd-operator |
MemberAddAsLearner |
successfully added new member https://192.168.34.11:2380 | |
openshift-kube-apiserver-operator |
kubelet |
kube-apiserver-operator-68f5d95b74-bqdtw |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine | |
(x2) | openshift-kube-apiserver-operator |
kubelet |
kube-apiserver-operator-68f5d95b74-bqdtw |
Started |
Started container kube-apiserver-operator |
(x2) | openshift-kube-apiserver-operator |
kubelet |
kube-apiserver-operator-68f5d95b74-bqdtw |
Created |
Created container: kube-apiserver-operator |
openshift-multus |
kubelet |
cni-sysctl-allowlist-ds-sg92v |
Killing |
Stopping container kube-multus-additional-cni-plugins | |
(x2) | openshift-controller-manager-operator |
kubelet |
openshift-controller-manager-operator-5745565d84-5l45t |
Created |
Created container: openshift-controller-manager-operator |
(x2) | openshift-controller-manager-operator |
kubelet |
openshift-controller-manager-operator-5745565d84-5l45t |
Started |
Started container openshift-controller-manager-operator |
openshift-kube-apiserver-operator |
kube-apiserver-operator |
kube-apiserver-operator-lock |
LeaderElection |
kube-apiserver-operator-68f5d95b74-bqdtw_2b7637c2-bf16-4aa3-ae5e-e97ec6650c9b became leader | |
openshift-kube-apiserver-operator |
kube-apiserver-operator |
kube-apiserver-operator |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} | |
 | openshift-kube-apiserver-operator | kube-apiserver-operator-audit-policy-controller-auditpolicycontroller | kube-apiserver-operator | FastControllerResync | Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling |
 | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ConfigMapCreated | Created ConfigMap/kubelet-serving-ca-bundle -n openshift-monitoring because it was missing |
 | openshift-monitoring | default-scheduler | kube-state-metrics-57fbd47578-96mh2 | Scheduled | Successfully assigned openshift-monitoring/kube-state-metrics-57fbd47578-96mh2 to master-2 |
 | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceAccountCreated | Created ServiceAccount/telemeter-client -n openshift-monitoring because it was missing |
 | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceAccountCreated | Created ServiceAccount/node-exporter -n openshift-monitoring because it was missing |
 | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/prometheus-k8s because it was missing |
 | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/cluster-monitoring-view because it was missing |
 | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/openshift-state-metrics because it was missing |
 | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:aggregated-metrics-reader because it was missing |
 | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/kube-state-metrics because it was missing |
 | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/telemeter-client because it was missing |
 | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/node-exporter because it was missing |
 | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/prometheus-k8s because it was missing |
 | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceAccountCreated | Created ServiceAccount/metrics-server -n openshift-monitoring because it was missing |
 | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceAccountCreated | Created ServiceAccount/openshift-state-metrics -n openshift-monitoring because it was missing |
 | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceAccountCreated | Created ServiceAccount/kube-state-metrics -n openshift-monitoring because it was missing |
 | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/pod-metrics-reader because it was missing |
 | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/node-exporter because it was missing |
 | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/telemeter-client because it was missing |
 | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/openshift-state-metrics because it was missing |
 | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceCreated | Created Service/thanos-querier -n openshift-monitoring because it was missing |
 | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:metrics-server because it was missing |
 | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/kube-state-metrics because it was missing |
 | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceCreated | Created Service/openshift-state-metrics -n openshift-monitoring because it was missing |
 | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceCreated | Created Service/node-exporter -n openshift-monitoring because it was missing |
 | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceCreated | Created Service/kube-state-metrics -n openshift-monitoring because it was missing |
 | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ConfigMapCreated | Created ConfigMap/kube-state-metrics-custom-resource-state-configmap -n openshift-monitoring because it was missing |
 | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:metrics-server because it was missing |
 | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/monitoring-rules-edit because it was missing |
 | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/telemeter-client-view because it was missing |
 | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceCreated | Created Service/telemeter-client -n openshift-monitoring because it was missing |
 | openshift-monitoring | default-scheduler | node-exporter-p4nr9 | Scheduled | Successfully assigned openshift-monitoring/node-exporter-p4nr9 to master-1 |
 | openshift-monitoring | daemonset-controller | node-exporter | SuccessfulCreate | Created pod: node-exporter-p4nr9 |
 | openshift-monitoring | daemonset-controller | node-exporter | SuccessfulCreate | Created pod: node-exporter-jc698 |
 | openshift-monitoring | default-scheduler | openshift-state-metrics-56d8dcb55c-h25c4 | Scheduled | Successfully assigned openshift-monitoring/openshift-state-metrics-56d8dcb55c-h25c4 to master-2 |
 | openshift-monitoring | replicaset-controller | openshift-state-metrics-56d8dcb55c | SuccessfulCreate | Created pod: openshift-state-metrics-56d8dcb55c-h25c4 |
 | openshift-monitoring | default-scheduler | node-exporter-jc698 | Scheduled | Successfully assigned openshift-monitoring/node-exporter-jc698 to master-2 |
 | openshift-monitoring | deployment-controller | kube-state-metrics | ScalingReplicaSet | Scaled up replica set kube-state-metrics-57fbd47578 to 1 |
 | openshift-monitoring | replicaset-controller | kube-state-metrics-57fbd47578 | SuccessfulCreate | Created pod: kube-state-metrics-57fbd47578-96mh2 |
 | openshift-monitoring | deployment-controller | openshift-state-metrics | ScalingReplicaSet | Scaled up replica set openshift-state-metrics-56d8dcb55c to 1 |
(x10) | openshift-ingress | kubelet | router-default-5ddb89f76-xf924 | Unhealthy | Startup probe failed: HTTP probe failed with statuscode: 500 |
 | openshift-monitoring | kubelet | node-exporter-p4nr9 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66366501aac86a6d898d235d0b96dbe7679a2e142e8c615524f0bdc3ddd68b21" |
(x2) | openshift-apiserver-operator | kubelet | openshift-apiserver-operator-7d88655794-dbtvc | Created | Created container: openshift-apiserver-operator |
 | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/monitoring-alertmanager-view -n openshift-monitoring because it was missing |
 | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/cluster-monitoring-metrics-api -n openshift-monitoring because it was missing |
 | openshift-apiserver-operator | kubelet | openshift-apiserver-operator-7d88655794-dbtvc | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9ef76839c19a20a0e01cdd2b9fd53ae31937d6f478b2c2343679099985fe9e47" already present on machine |
 | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/monitoring-alertmanager-api-writer -n openshift-user-workload-monitoring because it was missing |
 | openshift-monitoring | kubelet | kube-state-metrics-57fbd47578-96mh2 | FailedMount | MountVolume.SetUp failed for volume "kube-state-metrics-tls" : secret "kube-state-metrics-tls" not found |
(x2) | openshift-apiserver-operator | kubelet | openshift-apiserver-operator-7d88655794-dbtvc | Started | Started container openshift-apiserver-operator |
 | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/monitoring-alertmanager-edit -n openshift-monitoring because it was missing |
 | openshift-cluster-storage-operator | kubelet | cluster-storage-operator-56d4b95494-7ff2l | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d8df789ec16971dc14423860f7b20b9ee27d926e4e5be632714cadc15e7f9b32" already present on machine |
 | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | SecretCreated | Created Secret/telemeter-client -n openshift-monitoring because it was missing |
 | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | SecretCreated | Created Secret/telemeter-client-kube-rbac-proxy-config -n openshift-monitoring because it was missing |
 | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/monitoring-rules-view because it was missing |
 | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator because it was missing |
 | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/monitoring-edit because it was missing |
(x2) | openshift-etcd-operator | kubelet | etcd-operator-6bddf7d79-dtp9l | Created | Created container: etcd-operator |
(x2) | openshift-etcd-operator | kubelet | etcd-operator-6bddf7d79-dtp9l | Started | Started container etcd-operator |
 | openshift-etcd-operator | kubelet | etcd-operator-6bddf7d79-dtp9l | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0117f94d9f2894980a318780f3c0ab2efba02e72bc7ccb267bd44c4900eb0174" already present on machine |
 | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/monitoring-alertmanager-api-reader -n openshift-user-workload-monitoring because it was missing |
 | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/user-workload-monitoring-config-edit -n openshift-user-workload-monitoring because it was missing |
 | openshift-monitoring | kubelet | node-exporter-jc698 | FailedMount | MountVolume.SetUp failed for volume "node-exporter-tls" : secret "node-exporter-tls" not found |
 | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/metrics-server-auth-reader -n kube-system because it was missing |
(x2) | openshift-cluster-storage-operator | kubelet | cluster-storage-operator-56d4b95494-7ff2l | Created | Created container: cluster-storage-operator |
(x2) | openshift-cluster-storage-operator | kubelet | cluster-storage-operator-56d4b95494-7ff2l | Started | Started container cluster-storage-operator |
 | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceCreated | Created Service/metrics-server -n openshift-monitoring because it was missing |
 | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/alert-routing-edit because it was missing |
 | openshift-apiserver-operator | openshift-apiserver-operator | openshift-apiserver-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
 | openshift-monitoring | kubelet | node-exporter-p4nr9 | Started | Started container init-textfile |
 | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nNodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: Missing operand on node master-2\nEtcdEndpointsDegraded: no etcd members are present\nEtcdMembersDegraded: 1 of 2 members are available, NAME-PENDING-192.168.34.11 has not started" to "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nNodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: Missing operand on node master-2\nEtcdEndpointsDegraded: no etcd members are present\nEtcdMembersDegraded: 1 of 2 members are available, NAME-PENDING-192.168.34.11 has not started" |
 | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: Missing operand on node master-2\nEtcdEndpointsDegraded: no etcd members are present\nEtcdMembersDegraded: 1 of 2 members are available, NAME-PENDING-192.168.34.11 has not started" to "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nNodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: Missing operand on node master-2\nEtcdEndpointsDegraded: no etcd members are present\nEtcdMembersDegraded: 1 of 2 members are available, NAME-PENDING-192.168.34.11 has not started" |
(x3) | openshift-etcd-operator | openshift-cluster-etcd-operator-member-observer-controller-etcdmemberscontroller | etcd-operator | ReportEtcdMembersErrorUpdatingStatus | etcds.operator.openshift.io "cluster" not found |
 | openshift-etcd-operator | openshift-cluster-etcd-operator | etcd-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
 | openshift-etcd-operator | openshift-cluster-etcd-operator | openshift-cluster-etcd-operator-lock | LeaderElection | etcd-operator-6bddf7d79-dtp9l_ef98bf28-b82e-41d8-940b-98eeb1950b50 became leader |
 | openshift-monitoring | kubelet | node-exporter-jc698 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66366501aac86a6d898d235d0b96dbe7679a2e142e8c615524f0bdc3ddd68b21" |
 | openshift-marketplace | kubelet | certified-operators-kpbmd | Pulled | Successfully pulled image "registry.redhat.io/redhat/certified-operator-index:v4.18" in 13.968s (13.968s including waiting). Image size: 1199160216 bytes. |
 | openshift-marketplace | kubelet | certified-operators-kpbmd | Created | Created container: extract-content |
 | openshift-marketplace | kubelet | certified-operators-kpbmd | Started | Started container extract-content |
 | openshift-marketplace | kubelet | community-operators-cf69d | Pulled | Successfully pulled image "registry.redhat.io/redhat/community-operator-index:v4.18" in 13.929s (13.929s including waiting). Image size: 1181047702 bytes. |
 | openshift-marketplace | kubelet | community-operators-cf69d | Created | Created container: extract-content |
 | openshift-marketplace | kubelet | community-operators-cf69d | Started | Started container extract-content |
 | openshift-marketplace | kubelet | redhat-marketplace-frksz | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" in 13.883s (13.883s including waiting). Image size: 1057212814 bytes. |
 | openshift-marketplace | kubelet | redhat-marketplace-frksz | Created | Created container: extract-content |
 | openshift-marketplace | kubelet | redhat-marketplace-frksz | Started | Started container extract-content |
 | openshift-marketplace | kubelet | redhat-operators-xl9gv | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-operator-index:v4.18" in 13.938s (13.938s including waiting). Image size: 1629241735 bytes. |
 | openshift-marketplace | kubelet | redhat-operators-xl9gv | Created | Created container: extract-content |
 | openshift-marketplace | kubelet | redhat-operators-xl9gv | Started | Started container extract-content |
 | openshift-monitoring | kubelet | openshift-state-metrics-56d8dcb55c-h25c4 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine |
 | openshift-apiserver-operator | openshift-apiserver-operator | openshift-apiserver-operator-lock | LeaderElection | openshift-apiserver-operator-7d88655794-dbtvc_b658d861-f2c8-4e16-bbd3-a210b990c86d became leader |
 | openshift-monitoring | kubelet | node-exporter-p4nr9 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66366501aac86a6d898d235d0b96dbe7679a2e142e8c615524f0bdc3ddd68b21" in 1.313s (1.313s including waiting). Image size: 410753681 bytes. |
 | openshift-monitoring | kubelet | node-exporter-p4nr9 | Created | Created container: init-textfile |
 | openshift-apiserver-operator | openshift-apiserver-operator-audit-policy-controller-auditpolicycontroller | openshift-apiserver-operator | FastControllerResync | Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling |
 | openshift-monitoring | multus | openshift-state-metrics-56d8dcb55c-h25c4 | AddedInterface | Add eth0 [10.129.0.27/23] from ovn-kubernetes |
 | openshift-cluster-storage-operator | cluster-storage-operator | cluster-storage-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
openshift-cluster-storage-operator |
cluster-storage-operator |
cluster-storage-operator-lock |
LeaderElection |
cluster-storage-operator-56d4b95494-7ff2l_d3e2e08e-8c91-4398-9e0b-994159bdfbca became leader | |
openshift-monitoring |
multus |
kube-state-metrics-57fbd47578-96mh2 |
AddedInterface |
Add eth0 [10.129.0.28/23] from ovn-kubernetes | |
openshift-monitoring |
kubelet |
kube-state-metrics-57fbd47578-96mh2 |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aba459a30191b49c89c71863fd4ec15776092b818c6f5fa44e233824dea4c6cf" | |
openshift-marketplace |
kubelet |
redhat-operators-xl9gv |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5" | |
openshift-multus |
default-scheduler |
multus-admission-controller-7b6b7bb859-m8s2b |
Scheduled |
Successfully assigned openshift-multus/multus-admission-controller-7b6b7bb859-m8s2b to master-1 | |
openshift-monitoring |
kubelet |
node-exporter-p4nr9 |
Created |
Created container: kube-rbac-proxy | |
openshift-monitoring |
kubelet |
node-exporter-jc698 |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66366501aac86a6d898d235d0b96dbe7679a2e142e8c615524f0bdc3ddd68b21" in 961ms (961ms including waiting). Image size: 410753681 bytes. | |
openshift-multus |
replicaset-controller |
multus-admission-controller-7b6b7bb859 |
SuccessfulCreate |
Created pod: multus-admission-controller-7b6b7bb859-m8s2b | |
openshift-monitoring |
kubelet |
node-exporter-p4nr9 |
Started |
Started container node-exporter | |
openshift-monitoring |
kubelet |
node-exporter-p4nr9 |
Created |
Created container: node-exporter | |
openshift-monitoring |
kubelet |
node-exporter-p4nr9 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66366501aac86a6d898d235d0b96dbe7679a2e142e8c615524f0bdc3ddd68b21" already present on machine | |
openshift-monitoring |
kubelet |
node-exporter-jc698 |
Created |
Created container: init-textfile | |
openshift-monitoring |
kubelet |
openshift-state-metrics-56d8dcb55c-h25c4 |
Started |
Started container kube-rbac-proxy-self | |
openshift-monitoring |
kubelet |
openshift-state-metrics-56d8dcb55c-h25c4 |
Started |
Started container kube-rbac-proxy-main | |
openshift-monitoring |
kubelet |
openshift-state-metrics-56d8dcb55c-h25c4 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine | |
openshift-monitoring |
kubelet |
openshift-state-metrics-56d8dcb55c-h25c4 |
Created |
Created container: kube-rbac-proxy-self | |
openshift-multus |
deployment-controller |
multus-admission-controller |
ScalingReplicaSet |
Scaled up replica set multus-admission-controller-7b6b7bb859 to 1 | |
openshift-marketplace |
kubelet |
certified-operators-kpbmd |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5" | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-status-controller-statussyncer_etcd |
etcd-operator |
OperatorStatusChanged |
Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nNodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: Missing operand on node master-2\nEtcdEndpointsDegraded: no etcd members are present\nEtcdMembersDegraded: 1 of 2 members are available, NAME-PENDING-192.168.34.11 has not started" to "ScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nNodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: Missing operand on node master-2\nEtcdEndpointsDegraded: no etcd members are present\nEtcdMembersDegraded: 1 of 2 members are available, NAME-PENDING-192.168.34.11 has not started" | |
openshift-monitoring |
kubelet |
openshift-state-metrics-56d8dcb55c-h25c4 |
Created |
Created container: kube-rbac-proxy-main | |
openshift-monitoring |
kubelet |
node-exporter-jc698 |
Started |
Started container init-textfile | |
openshift-monitoring |
kubelet |
openshift-state-metrics-56d8dcb55c-h25c4 |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:982ec135c928d7c2904347f7727077c3d45b4c124557f6b3cb7dfca5ffa2e145" | |
openshift-marketplace |
kubelet |
community-operators-cf69d |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5" | |
openshift-marketplace |
kubelet |
redhat-marketplace-frksz |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5" | |
openshift-monitoring |
kubelet |
node-exporter-p4nr9 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine | |
openshift-monitoring |
kubelet |
node-exporter-p4nr9 |
Started |
Started container kube-rbac-proxy | |
openshift-monitoring |
kubelet |
kube-state-metrics-57fbd47578-96mh2 |
Started |
Started container kube-rbac-proxy-main | |
openshift-apiserver-operator |
openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice |
openshift-apiserver-operator |
FastControllerResync |
Controller "openshift-apiserver-APIService" resync interval is set to 10s which might lead to client request throttling | |
openshift-marketplace |
kubelet |
redhat-operators-xl9gv |
Created |
Created container: registry-server | |
openshift-monitoring |
kubelet |
node-exporter-jc698 |
Started |
Started container kube-rbac-proxy | |
openshift-marketplace |
kubelet |
redhat-operators-xl9gv |
Started |
Started container registry-server | |
openshift-monitoring |
kubelet |
openshift-state-metrics-56d8dcb55c-h25c4 |
Started |
Started container openshift-state-metrics | |
openshift-marketplace |
kubelet |
redhat-marketplace-frksz |
Started |
Started container registry-server | |
openshift-marketplace |
kubelet |
redhat-marketplace-frksz |
Created |
Created container: registry-server | |
| | openshift-marketplace | kubelet | redhat-marketplace-frksz | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5" in 1.019s (1.019s including waiting). Image size: 911296197 bytes. |
| | openshift-monitoring | kubelet | openshift-state-metrics-56d8dcb55c-h25c4 | Created | Created container: openshift-state-metrics |
| | openshift-monitoring | kubelet | node-exporter-jc698 | Started | Started container node-exporter |
| | openshift-monitoring | kubelet | node-exporter-jc698 | Created | Created container: node-exporter |
| | openshift-monitoring | kubelet | node-exporter-jc698 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66366501aac86a6d898d235d0b96dbe7679a2e142e8c615524f0bdc3ddd68b21" already present on machine |
| | openshift-marketplace | kubelet | community-operators-cf69d | Started | Started container registry-server |
| | openshift-marketplace | kubelet | community-operators-cf69d | Created | Created container: registry-server |
| | openshift-marketplace | kubelet | community-operators-cf69d | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5" in 1.08s (1.08s including waiting). Image size: 911296197 bytes. |
| | openshift-monitoring | kubelet | openshift-state-metrics-56d8dcb55c-h25c4 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:982ec135c928d7c2904347f7727077c3d45b4c124557f6b3cb7dfca5ffa2e145" in 1.397s (1.397s including waiting). Image size: 425015802 bytes. |
| | openshift-marketplace | kubelet | redhat-operators-xl9gv | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5" in 1.027s (1.027s including waiting). Image size: 911296197 bytes. |
| | openshift-monitoring | kubelet | kube-state-metrics-57fbd47578-96mh2 | Started | Started container kube-state-metrics |
| | openshift-monitoring | kubelet | kube-state-metrics-57fbd47578-96mh2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine |
| | openshift-marketplace | kubelet | certified-operators-kpbmd | Started | Started container registry-server |
| | openshift-marketplace | kubelet | certified-operators-kpbmd | Created | Created container: registry-server |
| | openshift-marketplace | kubelet | certified-operators-kpbmd | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5" in 1.03s (1.03s including waiting). Image size: 911296197 bytes. |
| | openshift-monitoring | kubelet | kube-state-metrics-57fbd47578-96mh2 | Created | Created container: kube-state-metrics |
| | openshift-multus | kubelet | multus-admission-controller-7b6b7bb859-m8s2b | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:20340db1108fda428a7abee6193330945c70ad69148f122a7f32a889047c8003" already present on machine |
| | openshift-monitoring | kubelet | kube-state-metrics-57fbd47578-96mh2 | Created | Created container: kube-rbac-proxy-main |
| | openshift-monitoring | kubelet | kube-state-metrics-57fbd47578-96mh2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ConfigMapCreated | Created ConfigMap/metrics-server-audit-profiles -n openshift-monitoring because it was missing |
| | openshift-monitoring | kubelet | node-exporter-jc698 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine |
| | openshift-machine-config-operator | machineconfigcontroller-nodecontroller | master | AnnotationChange | Node master-2 now has machineconfiguration.openshift.io/reason= |
| | openshift-machine-config-operator | machineconfigdaemon | master-2 | ConfigDriftMonitorStarted | Config Drift Monitor started, watching against rendered-master-1c6c5fbd3d52010f02d09a7fd9bcec0a |
| | openshift-machine-config-operator | machineconfigdaemon | master-2 | NodeDone | Setting node master-2, currentConfig rendered-master-1c6c5fbd3d52010f02d09a7fd9bcec0a to Done |
| | openshift-machine-config-operator | machineconfigdaemon | master-2 | Uncordon | Update completed for config rendered-master-1c6c5fbd3d52010f02d09a7fd9bcec0a and node has been uncordoned |
| | openshift-monitoring | kubelet | node-exporter-jc698 | Created | Created container: kube-rbac-proxy |
| | openshift-monitoring | kubelet | kube-state-metrics-57fbd47578-96mh2 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aba459a30191b49c89c71863fd4ec15776092b818c6f5fa44e233824dea4c6cf" in 1.653s (1.653s including waiting). Image size: 433592907 bytes. |
| | openshift-multus | multus | multus-admission-controller-7b6b7bb859-m8s2b | AddedInterface | Add eth0 [10.128.0.57/23] from ovn-kubernetes |
| | openshift-multus | replicaset-controller | multus-admission-controller-7b6b7bb859 | SuccessfulCreate | Created pod: multus-admission-controller-7b6b7bb859-vrzvk |
| | openshift-multus | kubelet | multus-admission-controller-77b66fddc8-9npgz | Killing | Stopping container multus-admission-controller |
| | openshift-multus | kubelet | multus-admission-controller-7b6b7bb859-m8s2b | Created | Created container: kube-rbac-proxy |
| | openshift-multus | kubelet | multus-admission-controller-7b6b7bb859-m8s2b | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine |
| | openshift-multus | kubelet | multus-admission-controller-7b6b7bb859-m8s2b | Started | Started container multus-admission-controller |
| | openshift-multus | kubelet | multus-admission-controller-7b6b7bb859-m8s2b | Created | Created container: multus-admission-controller |
| | openshift-multus | default-scheduler | multus-admission-controller-7b6b7bb859-vrzvk | Scheduled | Successfully assigned openshift-multus/multus-admission-controller-7b6b7bb859-vrzvk to master-2 |
| | openshift-multus | deployment-controller | multus-admission-controller | ScalingReplicaSet | Scaled down replica set multus-admission-controller-77b66fddc8 to 1 from 2 |
| | openshift-multus | deployment-controller | multus-admission-controller | ScalingReplicaSet | Scaled up replica set multus-admission-controller-7b6b7bb859 to 2 from 1 |
| | openshift-monitoring | kubelet | kube-state-metrics-57fbd47578-96mh2 | Created | Created container: kube-rbac-proxy-self |
| | openshift-multus | replicaset-controller | multus-admission-controller-77b66fddc8 | SuccessfulDelete | Deleted pod: multus-admission-controller-77b66fddc8-9npgz |
| | openshift-monitoring | kubelet | kube-state-metrics-57fbd47578-96mh2 | Started | Started container kube-rbac-proxy-self |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "ScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nNodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: Missing operand on node master-2\nEtcdEndpointsDegraded: no etcd members are present\nEtcdMembersDegraded: 1 of 2 members are available, NAME-PENDING-192.168.34.11 has not started" to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: Missing operand on node master-2\nEtcdEndpointsDegraded: no etcd members are present\nEtcdMembersDegraded: 1 of 2 members are available, NAME-PENDING-192.168.34.11 has not started" |
| | openshift-multus | kubelet | multus-admission-controller-7b6b7bb859-m8s2b | Started | Started container kube-rbac-proxy |
| | openshift-multus | kubelet | multus-admission-controller-77b66fddc8-9npgz | Killing | Stopping container kube-rbac-proxy |
| | openshift-machine-config-operator | machineconfigdaemon | master-1 | Uncordon | Update completed for config rendered-master-1c6c5fbd3d52010f02d09a7fd9bcec0a and node has been uncordoned |
| | openshift-monitoring | default-scheduler | metrics-server-8475fbcb68-8dq9n | Scheduled | Successfully assigned openshift-monitoring/metrics-server-8475fbcb68-8dq9n to master-2 |
| | openshift-monitoring | default-scheduler | metrics-server-8475fbcb68-p4n8s | Scheduled | Successfully assigned openshift-monitoring/metrics-server-8475fbcb68-p4n8s to master-1 |
| | openshift-machine-config-operator | machineconfigdaemon | master-1 | ConfigDriftMonitorStarted | Config Drift Monitor started, watching against rendered-master-1c6c5fbd3d52010f02d09a7fd9bcec0a |
| | openshift-machine-config-operator | machineconfigcontroller-nodecontroller | master | AnnotationChange | Node master-1 now has machineconfiguration.openshift.io/reason= |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | SecretCreated | Created Secret/metrics-server-2hutru8havafv -n openshift-monitoring because it was missing |
| | openshift-monitoring | deployment-controller | metrics-server | ScalingReplicaSet | Scaled up replica set metrics-server-8475fbcb68 to 2 |
| | openshift-monitoring | default-scheduler | telemeter-client-56c4f9c4b6-s6gwn | Scheduled | Successfully assigned openshift-monitoring/telemeter-client-56c4f9c4b6-s6gwn to master-2 |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ConfigMapCreated | Created ConfigMap/telemeter-trusted-ca-bundle-56c9b9fa8d9gs -n openshift-monitoring because it was missing |
| | openshift-machine-config-operator | machineconfigdaemon | master-1 | NodeDone | Setting node master-1, currentConfig rendered-master-1c6c5fbd3d52010f02d09a7fd9bcec0a to Done |
| | openshift-monitoring | multus | telemeter-client-56c4f9c4b6-s6gwn | AddedInterface | Add eth0 [10.129.0.30/23] from ovn-kubernetes |
| | openshift-monitoring | kubelet | telemeter-client-56c4f9c4b6-s6gwn | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce0ff00581505232eae7c6725b65f09e2a81f94b2af66aa60af7a1e101a1a705" |
| | openshift-monitoring | replicaset-controller | telemeter-client-56c4f9c4b6 | SuccessfulCreate | Created pod: telemeter-client-56c4f9c4b6-s6gwn |
| | openshift-monitoring | deployment-controller | telemeter-client | ScalingReplicaSet | Scaled up replica set telemeter-client-56c4f9c4b6 to 1 |
| | openshift-multus | kubelet | multus-admission-controller-7b6b7bb859-vrzvk | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:20340db1108fda428a7abee6193330945c70ad69148f122a7f32a889047c8003" |
| | openshift-multus | multus | multus-admission-controller-7b6b7bb859-vrzvk | AddedInterface | Add eth0 [10.129.0.29/23] from ovn-kubernetes |
| | openshift-monitoring | replicaset-controller | metrics-server-8475fbcb68 | SuccessfulCreate | Created pod: metrics-server-8475fbcb68-p4n8s |
| | openshift-monitoring | replicaset-controller | metrics-server-8475fbcb68 | SuccessfulCreate | Created pod: metrics-server-8475fbcb68-8dq9n |
| | openshift-monitoring | multus | metrics-server-8475fbcb68-8dq9n | AddedInterface | Add eth0 [10.129.0.31/23] from ovn-kubernetes |
| | openshift-monitoring | multus | metrics-server-8475fbcb68-p4n8s | AddedInterface | Add eth0 [10.128.0.58/23] from ovn-kubernetes |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | SecretCreated | Created Secret/grpc-tls -n openshift-monitoring because it was missing |
| | openshift-monitoring | kubelet | metrics-server-8475fbcb68-8dq9n | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aa8586795f9801090b8f01a74743474c41b5987eefc3a9b2c58f937098a1704f" |
| | openshift-monitoring | kubelet | metrics-server-8475fbcb68-p4n8s | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aa8586795f9801090b8f01a74743474c41b5987eefc3a9b2c58f937098a1704f" |
| | openshift-multus | replicaset-controller | multus-admission-controller-77b66fddc8 | SuccessfulDelete | Deleted pod: multus-admission-controller-77b66fddc8-mgc7h |
| | openshift-multus | deployment-controller | multus-admission-controller | ScalingReplicaSet | Scaled down replica set multus-admission-controller-77b66fddc8 to 0 from 1 |
| | openshift-multus | kubelet | multus-admission-controller-7b6b7bb859-vrzvk | Created | Created container: multus-admission-controller |
| | openshift-multus | kubelet | multus-admission-controller-7b6b7bb859-vrzvk | Created | Created container: kube-rbac-proxy |
| | openshift-multus | kubelet | multus-admission-controller-7b6b7bb859-vrzvk | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:20340db1108fda428a7abee6193330945c70ad69148f122a7f32a889047c8003" in 1.83s (1.83s including waiting). Image size: 449613161 bytes. |
| | openshift-multus | kubelet | multus-admission-controller-7b6b7bb859-vrzvk | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine |
| | openshift-multus | kubelet | multus-admission-controller-7b6b7bb859-vrzvk | Started | Started container multus-admission-controller |
| | openshift-multus | kubelet | multus-admission-controller-77b66fddc8-mgc7h | Killing | Stopping container multus-admission-controller |
| | openshift-multus | kubelet | multus-admission-controller-77b66fddc8-mgc7h | Killing | Stopping container kube-rbac-proxy |
| | openshift-marketplace | kubelet | redhat-operators-xl9gv | Unhealthy | Startup probe failed: timeout: failed to connect service ":50051" within 1s |
| | openshift-multus | kubelet | multus-admission-controller-7b6b7bb859-vrzvk | Started | Started container kube-rbac-proxy |
| | openshift-monitoring | kubelet | telemeter-client-56c4f9c4b6-s6gwn | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce0ff00581505232eae7c6725b65f09e2a81f94b2af66aa60af7a1e101a1a705" in 2.796s (2.796s including waiting). Image size: 473570649 bytes. |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: Missing operand on node master-2\nEtcdEndpointsDegraded: no etcd members are present\nEtcdMembersDegraded: 1 of 2 members are available, NAME-PENDING-192.168.34.11 has not started" to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: Missing operand on node master-2\nStaticPodsDegraded: pod/etcd-master-1 container \"etcd\" is terminated: Error: {\"level\":\"warn\",\"ts\":\"2025-10-14T13:09:40.675824Z\",\"caller\":\"flags/flag.go:93\",\"msg\":\"unrecognized environment variable\",\"environment-variable\":\"ETCDCTL_ENDPOINTS=\"}\nStaticPodsDegraded: {\"level\":\"warn\",\"ts\":\"2025-10-14T13:09:45.678374Z\",\"logger\":\"etcd-client\",\"caller\":\"v3@v3.5.18/retry_interceptor.go:63\",\"msg\":\"retrying of unary invoker failed\",\"target\":\"etcd-endpoints://0xc0003221e0/127.0.0.1:2379\",\"attempt\":0,\"error\":\"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \\\"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\\\"\"}\nStaticPodsDegraded: Error: context deadline exceeded\nStaticPodsDegraded: could not parse revision.json, falling back to WAL parsing. Err=open /var/lib/etcd/revision.json: no such file or directorycould not find local cluster id: couldn't find cluster id in WAL or revision: open /var/lib/etcd/member/wal: no such file or directory\nStaticPodsDegraded: open /var/lib/etcd/revision.json: no such file or directory\nEtcdEndpointsDegraded: no etcd members are present\nEtcdMembersDegraded: 1 of 2 members are available, NAME-PENDING-192.168.34.11 has not started" |
| | openshift-monitoring | kubelet | telemeter-client-56c4f9c4b6-s6gwn | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d26267190f13ef59cf0f8f5eee729694c7faccc36ab1294566192272625a58af" |
| | openshift-monitoring | kubelet | telemeter-client-56c4f9c4b6-s6gwn | Created | Created container: telemeter-client |
| | openshift-monitoring | kubelet | metrics-server-8475fbcb68-p4n8s | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aa8586795f9801090b8f01a74743474c41b5987eefc3a9b2c58f937098a1704f" in 2.445s (2.445s including waiting). Image size: 464468268 bytes. |
| | openshift-monitoring | kubelet | metrics-server-8475fbcb68-8dq9n | Started | Started container metrics-server |
| | openshift-monitoring | kubelet | metrics-server-8475fbcb68-p4n8s | Created | Created container: metrics-server |
| | openshift-monitoring | kubelet | telemeter-client-56c4f9c4b6-s6gwn | Started | Started container telemeter-client |
| | openshift-monitoring | kubelet | metrics-server-8475fbcb68-8dq9n | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aa8586795f9801090b8f01a74743474c41b5987eefc3a9b2c58f937098a1704f" in 2.604s (2.604s including waiting). Image size: 464468268 bytes. |
| | openshift-monitoring | kubelet | metrics-server-8475fbcb68-p4n8s | Started | Started container metrics-server |
| | openshift-monitoring | kubelet | metrics-server-8475fbcb68-8dq9n | Created | Created container: metrics-server |
| | openshift-config-operator | config-operator | config-operator-lock | LeaderElection | openshift-config-operator-55957b47d5-vtkr6_e3bef568-a973-4cd1-b53e-5fdcc33a7acc became leader |
| (x2) | openshift-config-operator | kubelet | openshift-config-operator-55957b47d5-vtkr6 | Created | Created container: openshift-config-operator |
| | openshift-config-operator | config-operator-configoperatorcontroller | openshift-config-operator | FastControllerResync | Controller "ConfigOperatorController" resync interval is set to 10s which might lead to client request throttling |
| | openshift-config-operator | kubelet | openshift-config-operator-55957b47d5-vtkr6 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aa10afc83b17b0d76fcff8963f51e62ae851f145cd6c27f61a0604e0c713fe3a" already present on machine |
| (x2) | openshift-config-operator | kubelet | openshift-config-operator-55957b47d5-vtkr6 | Started | Started container openshift-config-operator |
| | openshift-monitoring | kubelet | telemeter-client-56c4f9c4b6-s6gwn | Created | Created container: reload |
| | openshift-monitoring | kubelet | telemeter-client-56c4f9c4b6-s6gwn | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d26267190f13ef59cf0f8f5eee729694c7faccc36ab1294566192272625a58af" in 1.508s (1.508s including waiting). Image size: 430951015 bytes. |
| | openshift-monitoring | kubelet | telemeter-client-56c4f9c4b6-s6gwn | Started | Started container reload |
| | openshift-monitoring | kubelet | telemeter-client-56c4f9c4b6-s6gwn | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine |
| | openshift-monitoring | kubelet | telemeter-client-56c4f9c4b6-s6gwn | Created | Created container: kube-rbac-proxy |
| | openshift-monitoring | kubelet | telemeter-client-56c4f9c4b6-s6gwn | Started | Started container kube-rbac-proxy |
| (x2) | openshift-kube-scheduler-operator | kubelet | openshift-kube-scheduler-operator-766d6b44f6-gtvcp | Created | Created container: kube-scheduler-operator-container |
| (x2) | openshift-kube-scheduler-operator | kubelet | openshift-kube-scheduler-operator-766d6b44f6-gtvcp | Started | Started container kube-scheduler-operator-container |
| | openshift-kube-scheduler-operator | kubelet | openshift-kube-scheduler-operator-766d6b44f6-gtvcp | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f81582ec6e6cc159d578a2d70ce7c8a4db8eb0172334226c9123770d7d2a1642" already present on machine |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-lock | LeaderElection | openshift-kube-scheduler-operator-766d6b44f6-gtvcp_d14459ce-6d3d-4fb7-8106-7f23392cb202 became leader |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator | openshift-kube-scheduler-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| (x13) | openshift-kube-apiserver-operator | kube-apiserver-operator-target-config-controller-targetconfigcontroller | kube-apiserver-operator | ConfigMissing | apiServerArguments.etcd-servers has less than three endpoints: [https://192.168.34.10:2379 https://localhost:2379] |
| (x3) | openshift-multus | kubelet | cni-sysctl-allowlist-ds-fmkcf | Unhealthy | Readiness probe errored: rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1 |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]" to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing PodIP in operand openshift-kube-scheduler-master-1 on node master-1, Missing operand on node master-2]" |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-1 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b283544da0bfbf6c8c5a11e0ca9fb4daaf4ac4ec910b30c07c7bef65a98f11d" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: status.versions changed from [{"raw-internal" "4.18.25"}] to [{"raw-internal" "4.18.25"} {"kube-scheduler" "1.31.13"} {"operator" "4.18.25"}] |
| (x2) | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorVersionChanged | clusteroperator/kube-scheduler version "operator" changed from "" to "4.18.25" |
| (x3) | openshift-multus | kubelet | cni-sysctl-allowlist-ds-sg92v | Unhealthy | Readiness probe errored: rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1 |
| (x2) | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorVersionChanged | clusteroperator/kube-scheduler version "kube-scheduler" changed from "" to "1.31.13" |
| | openshift-kube-scheduler | static-pod-installer | installer-4-master-1 | StaticPodInstallerCompleted | Successfully installed revision 4 |
| (x5) | openshift-etcd | kubelet | etcd-guard-master-1 | Unhealthy | Readiness probe failed: Get "https://192.168.34.11:9980/readyz": net/http: request canceled (Client.Timeout exceeded while awaiting headers) |
| (x5) | openshift-etcd | kubelet | etcd-guard-master-1 | ProbeError | Readiness probe error: Get "https://192.168.34.11:9980/readyz": net/http: request canceled (Client.Timeout exceeded while awaiting headers) body: |
| | openshift-kube-controller-manager | static-pod-installer | installer-3-master-1 | StaticPodInstallerFailed | Installing revision 3: configmaps "client-ca" not found |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]" to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]\nNodeInstallerDegraded: 1 nodes are failing on revision 3:\nNodeInstallerDegraded: installer: l\" ...\nNodeInstallerDegraded: I1014 13:10:03.558913 1 cmd.go:629] Writing config file \"/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-3/configmaps/kube-controller-manager-pod/version\" ...\nNodeInstallerDegraded: I1014 13:10:03.559051 1 cmd.go:277] Creating directory \"/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-3/configmaps/recycler-config\" ...\nNodeInstallerDegraded: I1014 13:10:03.559184 1 cmd.go:629] Writing config file \"/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-3/configmaps/recycler-config/recycler-pod.yaml\" ...\nNodeInstallerDegraded: I1014 13:10:03.559351 1 cmd.go:277] Creating directory \"/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-3/configmaps/service-ca\" ...\nNodeInstallerDegraded: I1014 13:10:03.559457 1 cmd.go:629] Writing config file \"/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-3/configmaps/service-ca/ca-bundle.crt\" ...\nNodeInstallerDegraded: I1014 13:10:03.559604 1 cmd.go:277] Creating directory \"/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-3/configmaps/serviceaccount-ca\" ...\nNodeInstallerDegraded: I1014 13:10:03.559719 1 cmd.go:629] Writing config file \"/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-3/configmaps/serviceaccount-ca/ca-bundle.crt\" ...\nNodeInstallerDegraded: I1014 13:10:03.559861 1 cmd.go:221] Creating target resource directory \"/etc/kubernetes/static-pod-resources/kube-controller-manager-certs\" ...\nNodeInstallerDegraded: I1014 13:10:03.559953 1 cmd.go:229] Getting secrets ...\nNodeInstallerDegraded: I1014 13:10:03.757643 1 copy.go:32] Got secret openshift-kube-controller-manager/csr-signer\nNodeInstallerDegraded: I1014 13:10:03.956160 1 copy.go:32] Got secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key\nNodeInstallerDegraded: I1014 13:10:03.956539 1 cmd.go:242] Getting config maps ...\nNodeInstallerDegraded: I1014 13:10:04.156837 1 copy.go:60] Got configMap openshift-kube-controller-manager/aggregator-client-ca\nNodeInstallerDegraded: I1014 13:10:04.355849 1 copy.go:52] Failed to get config map openshift-kube-controller-manager/client-ca: configmaps \"client-ca\" not found\nNodeInstallerDegraded: F1014 13:10:04.557705 1 cmd.go:109] failed to copy: configmaps \"client-ca\" not found\nNodeInstallerDegraded: " |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-etcd-client | etcd-operator | MemberPromote | successfully promoted learner member https://192.168.34.11:2380 |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | InstallerPodFailed | installer errors: installer: l" ... I1014 13:10:03.558913 1 cmd.go:629] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-3/configmaps/kube-controller-manager-pod/version" ... I1014 13:10:03.559051 1 cmd.go:277] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-3/configmaps/recycler-config" ... I1014 13:10:03.559184 1 cmd.go:629] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-3/configmaps/recycler-config/recycler-pod.yaml" ... I1014 13:10:03.559351 1 cmd.go:277] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-3/configmaps/service-ca" ... I1014 13:10:03.559457 1 cmd.go:629] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-3/configmaps/service-ca/ca-bundle.crt" ... I1014 13:10:03.559604 1 cmd.go:277] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-3/configmaps/serviceaccount-ca" ... I1014 13:10:03.559719 1 cmd.go:629] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-3/configmaps/serviceaccount-ca/ca-bundle.crt" ... I1014 13:10:03.559861 1 cmd.go:221] Creating target resource directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-certs" ... I1014 13:10:03.559953 1 cmd.go:229] Getting secrets ... I1014 13:10:03.757643 1 copy.go:32] Got secret openshift-kube-controller-manager/csr-signer I1014 13:10:03.956160 1 copy.go:32] Got secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key I1014 13:10:03.956539 1 cmd.go:242] Getting config maps ... I1014 13:10:04.156837 1 copy.go:60] Got configMap openshift-kube-controller-manager/aggregator-client-ca I1014 13:10:04.355849 1 copy.go:52] Failed to get config map openshift-kube-controller-manager/client-ca: configmaps "client-ca" not found F1014 13:10:04.557705 1 cmd.go:109] failed to copy: configmaps "client-ca" not found |
| | openshift-kube-scheduler | multus | openshift-kube-scheduler-guard-master-1 | AddedInterface | Add eth0 [10.128.0.59/23] from ovn-kubernetes |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing PodIP in operand openshift-kube-scheduler-master-1 on node master-1, Missing operand on node master-2]" to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: Missing operand on node master-2" |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-guard-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f81582ec6e6cc159d578a2d70ce7c8a4db8eb0172334226c9123770d7d2a1642" already present on machine |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-guardcontroller | openshift-kube-scheduler-operator | PodCreated | Created Pod/openshift-kube-scheduler-guard-master-1 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-guard-master-1 | Created | Created container: guard |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-guard-master-1 | Started | Started container guard |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-guardcontroller | openshift-kube-scheduler-operator | PodUpdated | Updated Pod/openshift-kube-scheduler-guard-master-1 -n openshift-kube-scheduler because it changed |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-1 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b283544da0bfbf6c8c5a11e0ca9fb4daaf4ac4ec910b30c07c7bef65a98f11d" in 7.291s (7.291s including waiting). Image size: 945482213 bytes. |
| | openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObservedConfigChanged | Writing updated section ("oauthAPIServer") of observed config: "\u00a0\u00a0map[string]any{\n\u00a0\u00a0\t\"apiServerArguments\": map[string]any{\n\u00a0\u00a0\t\t\"api-audiences\": []any{string(\"https://kubernetes.default.svc\")},\n\u00a0\u00a0\t\t\"cors-allowed-origins\": []any{string(`//127\\.0\\.0\\.1(:|$)`), string(\"//localhost(:|$)\")},\n\u00a0\u00a0\t\t\"etcd-servers\": []any{\n\u00a0\u00a0\t\t\tstring(\"https://192.168.34.10:2379\"),\n+\u00a0\t\t\tstring(\"https://192.168.34.11:2379\"),\n\u00a0\u00a0\t\t},\n\u00a0\u00a0\t\t\"tls-cipher-suites\": []any{string(\"TLS_AES_128_GCM_SHA256\"), string(\"TLS_AES_256_GCM_SHA384\"), string(\"TLS_CHACHA20_POLY1305_SHA256\"), string(\"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256\"), ...},\n\u00a0\u00a0\t\t\"tls-min-version\": string(\"VersionTLS12\"),\n\u00a0\u00a0\t},\n\u00a0\u00a0}\n" |
| | openshift-oauth-apiserver | kubelet | apiserver-c57444595-mj7cx | Killing | Stopping container oauth-apiserver |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-1 | Created | Created container: wait-for-host-port |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-1 | Started | Started container wait-for-host-port |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b283544da0bfbf6c8c5a11e0ca9fb4daaf4ac4ec910b30c07c7bef65a98f11d" already present on machine |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-1 | Created | Created container: kube-scheduler |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-1 | Started | Started container kube-scheduler |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f81582ec6e6cc159d578a2d70ce7c8a4db8eb0172334226c9123770d7d2a1642" already present on machine |
openshift-etcd-operator |
openshift-cluster-etcd-operator-revisioncontroller |
etcd-operator |
StartingNewRevision |
new revision 2 triggered by "required configmap/etcd-endpoints has been created" | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller |
etcd-operator |
ConfigMapUpdated |
Updated ConfigMap/etcd-endpoints -n openshift-etcd: caused by changes in data.5ca86b2f73fec16a | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Progressing changed from False to True ("APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 1, desired generation is 2.") | |
openshift-authentication-operator |
cluster-authentication-operator-config-observer-configobserver |
authentication-operator |
ObserveStorageUpdated |
Updated storage urls to https://192.168.34.10:2379,https://192.168.34.11:2379 | |
openshift-oauth-apiserver |
deployment-controller |
apiserver |
ScalingReplicaSet |
Scaled up replica set apiserver-96c4c446c to 1 from 0 | |
openshift-oauth-apiserver |
deployment-controller |
apiserver |
ScalingReplicaSet |
Scaled down replica set apiserver-c57444595 to 1 from 2 | |
openshift-oauth-apiserver |
replicaset-controller |
apiserver-96c4c446c |
SuccessfulCreate |
Created pod: apiserver-96c4c446c-728v2 | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
StartingNewRevision |
new revision 1 triggered by "configmap \"kube-apiserver-pod-0\" not found" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-target-config-controller-targetconfigcontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-pod -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-target-config-controller-targetconfigcontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/config -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-config-observer-configobserver |
kube-apiserver-operator |
ObservedConfigChanged |
Writing updated observed config: map[string]any{ "admission": map[string]any{"pluginConfig": map[string]any{"PodSecurity": map[string]any{"configuration": map[string]any{"defaults": map[string]any{"audit": string("restricted"), "audit-version": string("latest"), "enforce": string("privileged"), "enforce-version": string("latest"), ...}}}, "network.openshift.io/ExternalIPRanger": map[string]any{"configuration": map[string]any{"allowIngressIP": bool(false), "apiVersion": string("network.openshift.io/v1"), "kind": string("ExternalIPRangerAdmissionConfig")}}, "network.openshift.io/RestrictedEndpointsAdmission": map[string]any{"configuration": map[string]any{"apiVersion": string("network.openshift.io/v1"), "kind": string("RestrictedEndpointsAdmissionConfig"), "restrictedCIDRs": []any{string("10.128.0.0/14"), string("172.30.0.0/16")}}}}}, "apiServerArguments": map[string]any{ "api-audiences": []any{string("https://kubernetes.default.svc")}, "authentication-token-webhook-config-file": []any{string("/etc/kubernetes/static-pod-resources/secrets/webhook-authenticat"...)}, "authentication-token-webhook-version": []any{string("v1")}, "etcd-servers": []any{ string("https://192.168.34.10:2379"), + string("https://192.168.34.11:2379"), string("https://localhost:2379"), }, "feature-gates": []any{string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), ...}, "goaway-chance": []any{string("0.001")}, ... // 4 identical entries }, "corsAllowedOrigins": []any{string(`//127\.0\.0\.1(:|$)`), string("//localhost(:|$)")}, "servicesSubnet": string("172.30.0.0/16"), "servingInfo": map[string]any{"bindAddress": string("0.0.0.0:6443"), "bindNetwork": string("tcp4"), "cipherSuites": []any{string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), string("TLS_CHACHA20_POLY1305_SHA256"), string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), ...}, "minTLSVersion": string("VersionTLS12"), ...}, } | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-config-observer-configobserver |
kube-apiserver-operator |
ObserveStorageUpdated |
Updated storage urls to https://192.168.34.10:2379,https://192.168.34.11:2379,https://localhost:2379 | |
openshift-oauth-apiserver |
replicaset-controller |
apiserver-c57444595 |
SuccessfulDelete |
Deleted pod: apiserver-c57444595-mj7cx | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-pod-1 -n openshift-kube-apiserver because it was missing | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master-1 |
Started |
Started container kube-scheduler-recovery-controller | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-target-config-controller-targetconfigcontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/client-ca -n openshift-kube-apiserver because it was missing | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master-1 |
Created |
Created container: kube-scheduler-cert-syncer | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master-1 |
Started |
Started container kube-scheduler-cert-syncer | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master-1 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f81582ec6e6cc159d578a2d70ce7c8a4db8eb0172334226c9123770d7d2a1642" already present on machine | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master-1 |
Created |
Created container: kube-scheduler-recovery-controller | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/config-1 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-resource-sync-controller-resourcesynccontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-client-ca -n openshift-config-managed because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded changed from False to True ("GuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found") | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Degraded changed from False to True ("GuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]\nNodeInstallerDegraded: 1 nodes are failing on revision 3:\nNodeInstallerDegraded: installer: l\" ...\nNodeInstallerDegraded: I1014 13:10:03.558913 1 cmd.go:629] Writing config file \"/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-3/configmaps/kube-controller-manager-pod/version\" ...\nNodeInstallerDegraded: I1014 13:10:03.559051 1 cmd.go:277] Creating directory \"/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-3/configmaps/recycler-config\" ...\nNodeInstallerDegraded: I1014 13:10:03.559184 1 cmd.go:629] Writing config file \"/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-3/configmaps/recycler-config/recycler-pod.yaml\" ...\nNodeInstallerDegraded: I1014 13:10:03.559351 1 cmd.go:277] Creating directory \"/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-3/configmaps/service-ca\" ...\nNodeInstallerDegraded: I1014 13:10:03.559457 1 cmd.go:629] Writing config file \"/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-3/configmaps/service-ca/ca-bundle.crt\" ...\nNodeInstallerDegraded: I1014 13:10:03.559604 1 cmd.go:277] Creating directory \"/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-3/configmaps/serviceaccount-ca\" ...\nNodeInstallerDegraded: I1014 13:10:03.559719 1 cmd.go:629] Writing config file \"/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-3/configmaps/serviceaccount-ca/ca-bundle.crt\" ...\nNodeInstallerDegraded: I1014 13:10:03.559861 1 cmd.go:221] Creating target resource directory \"/etc/kubernetes/static-pod-resources/kube-controller-manager-certs\" ...\nNodeInstallerDegraded: I1014 13:10:03.559953 1 cmd.go:229] Getting secrets ...\nNodeInstallerDegraded: I1014 13:10:03.757643 1 copy.go:32] Got secret openshift-kube-controller-manager/csr-signer\nNodeInstallerDegraded: I1014 13:10:03.956160 1 copy.go:32] Got secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key\nNodeInstallerDegraded: I1014 13:10:03.956539 1 cmd.go:242] Getting config maps ...\nNodeInstallerDegraded: I1014 13:10:04.156837 1 copy.go:60] Got configMap openshift-kube-controller-manager/aggregator-client-ca\nNodeInstallerDegraded: I1014 13:10:04.355849 1 copy.go:52] Failed to get config map openshift-kube-controller-manager/client-ca: configmaps \"client-ca\" not found\nNodeInstallerDegraded: F1014 13:10:04.557705 1 cmd.go:109] failed to copy: configmaps \"client-ca\" not found\nNodeInstallerDegraded: ") | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-target-config-controller-targetconfigcontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-server-ca -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-1 -n openshift-kube-apiserver because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/client-ca -n openshift-kube-controller-manager because it was missing | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 1, desired generation is 2." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 1/2 pods have been updated to the latest generation and 1/2 pods are available" | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-revisioncontroller |
etcd-operator |
ConfigMapCreated |
Created ConfigMap/etcd-pod-2 -n openshift-etcd because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/bound-sa-token-signing-certs-1 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-resource-sync-controller-resourcesynccontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-server-ca -n openshift-config-managed because it was missing | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-revisioncontroller |
etcd-operator |
ConfigMapCreated |
Created ConfigMap/etcd-endpoints-2 -n openshift-etcd because it was missing | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-revisioncontroller |
etcd-operator |
ConfigMapCreated |
Created ConfigMap/etcd-all-bundles-2 -n openshift-etcd because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/etcd-serving-ca-1 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-node-kubeconfig-controller-nodekubeconfigcontroller |
kube-apiserver-operator |
SecretCreated |
Created Secret/node-kubeconfigs -n openshift-kube-apiserver because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
StartingNewRevision |
new revision 5 triggered by "required configmap/serviceaccount-ca has changed" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-server-ca-1 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found" to "GuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found" | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-targetconfigcontroller |
openshift-kube-scheduler-operator |
ConfigMapUpdated |
Updated ConfigMap/serviceaccount-ca -n openshift-kube-scheduler: caused by changes in data.ca-bundle.crt | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-targetconfigcontroller |
kube-controller-manager-operator |
ConfigMapUpdated |
Updated ConfigMap/serviceaccount-ca -n openshift-kube-controller-manager: caused by changes in data.ca-bundle.crt | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
StartingNewRevision |
new revision 4 triggered by "required configmap/serviceaccount-ca has changed" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kubelet-serving-ca-1 -n openshift-kube-apiserver because it was missing | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-revisioncontroller |
etcd-operator |
SecretCreated |
Created Secret/etcd-all-certs-2 -n openshift-etcd because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/config-5 -n openshift-kube-scheduler because it was missing | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-installer-controller |
etcd-operator |
NodeCurrentRevisionChanged |
Updated node "master-1" from revision 0 to 1 because static pod is ready | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/sa-token-signing-certs-1 -n openshift-kube-apiserver because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/kube-scheduler-pod-5 -n openshift-kube-scheduler because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/kube-controller-manager-pod-4 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/serviceaccount-ca-5 -n openshift-kube-scheduler because it was missing | |
(x9) | openshift-route-controller-manager |
kubelet |
route-controller-manager-7f89f9db8c-j4hd5 |
FailedMount |
MountVolume.SetUp failed for volume "client-ca" : configmap "client-ca" not found |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/scheduler-kubeconfig-5 -n openshift-kube-scheduler because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-audit-policies-1 -n openshift-kube-apiserver because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/config-4 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/cluster-policy-controller-config-4 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-installer-controller |
kube-controller-manager-operator |
PodCreated |
Created Pod/installer-3-retry-1-master-1 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-5 -n openshift-kube-scheduler because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
SecretCreated |
Created Secret/etcd-client-1 -n openshift-kube-apiserver because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
SecretCreated |
Created Secret/serving-cert-5 -n openshift-kube-scheduler because it was missing | |
openshift-kube-controller-manager |
kubelet |
installer-3-retry-1-master-1 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:67a578604f1437ddb47d87e748b6772d86dd3856048cc355226789db22724b55" already present on machine | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/controller-manager-kubeconfig-4 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-controller-manager |
kubelet |
installer-3-retry-1-master-1 |
Created |
Created container: installer | |
openshift-kube-controller-manager |
kubelet |
installer-3-retry-1-master-1 |
Started |
Started container installer | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ConfigMapUpdated |
Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: caused by changes in data.config-file.yaml | |
openshift-kube-controller-manager |
multus |
installer-3-retry-1-master-1 |
AddedInterface |
Add eth0 [10.128.0.60/23] from ovn-kubernetes | |
(x9) | openshift-controller-manager |
kubelet |
controller-manager-bcf7659b-pckjm |
FailedMount |
MountVolume.SetUp failed for volume "client-ca" : configmap "client-ca" not found |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
SecretCreated |
Created Secret/localhost-recovery-serving-certkey-1 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
SecretCreated |
Created Secret/localhost-recovery-client-token-1 -n openshift-kube-apiserver because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
SecretCreated |
Created Secret/localhost-recovery-client-token-5 -n openshift-kube-scheduler because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Degraded changed from False to True ("GuardControllerDegraded: Missing operand on node master-2") | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-installer-controller |
etcd-operator |
NodeTargetRevisionChanged |
Updating node "master-2" from revision 0 to 2 because node master-2 static pod not found | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/kube-controller-cert-syncer-kubeconfig-4 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
RevisionTriggered |
new revision 5 triggered by "required configmap/serviceaccount-ca has changed" | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/serviceaccount-ca-4 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
SecretCreated |
Created Secret/webhook-authenticator-1 -n openshift-kube-apiserver because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
PodDisruptionBudgetCreated |
Created PodDisruptionBudget.policy/metrics-server -n openshift-monitoring because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found" to "GuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]" | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/service-ca-4 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Progressing message changed from "NodeInstallerProgressing: 2 nodes are at revision 0; 0 nodes have achieved new revision 4" to "NodeInstallerProgressing: 2 nodes are at revision 0; 0 nodes have achieved new revision 5",Available message changed from "StaticPodsAvailable: 0 nodes are active; 2 nodes are at revision 0; 0 nodes have achieved new revision 4" to "StaticPodsAvailable: 0 nodes are active; 2 nodes are at revision 0; 0 nodes have achieved new revision 5" | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
APIServiceCreated |
Created APIService.apiregistration.k8s.io/v1beta1.metrics.k8s.io because it was missing | |
(x39) | openshift-kube-apiserver-operator |
kube-apiserver-operator-installer-controller |
kube-apiserver-operator |
RequiredInstallerResourcesMissing |
configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0 |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
RevisionTriggered |
new revision 1 triggered by "configmap \"kube-apiserver-pod-0\" not found" | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-installer-controller |
etcd-operator |
PodCreated |
Created Pod/installer-2-master-2 -n openshift-etcd because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/recycler-config-4 -n openshift-kube-controller-manager because it was missing | |
openshift-etcd |
multus |
installer-2-master-2 |
AddedInterface |
Add eth0 [10.129.0.32/23] from ovn-kubernetes | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
SecretCreated |
Created Secret/service-account-private-key-4 -n openshift-kube-controller-manager because it was missing | |
openshift-etcd |
kubelet |
installer-2-master-2 |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0117f94d9f2894980a318780f3c0ab2efba02e72bc7ccb267bd44c4900eb0174" | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-installer-controller |
openshift-kube-scheduler-operator |
PodCreated |
Created Pod/installer-5-master-1 -n openshift-kube-scheduler because it was missing | |
openshift-kube-scheduler |
kubelet |
installer-5-master-1 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f81582ec6e6cc159d578a2d70ce7c8a4db8eb0172334226c9123770d7d2a1642" already present on machine | |
openshift-kube-scheduler |
multus |
installer-5-master-1 |
AddedInterface |
Add eth0 [10.128.0.61/23] from ovn-kubernetes | |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/serving-cert-4 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-scheduler | kubelet | installer-5-master-1 | Started | Started container installer |
| | openshift-kube-scheduler | kubelet | installer-5-master-1 | Created | Created container: installer |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]" to "GuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]",Progressing changed from False to True ("NodeInstallerProgressing: 2 nodes are at revision 0; 0 nodes have achieved new revision 1"),Available message changed from "StaticPodsAvailable: 0 nodes are active; 2 nodes are at revision 0" to "StaticPodsAvailable: 0 nodes are active; 2 nodes are at revision 0; 0 nodes have achieved new revision 1" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/localhost-recovery-client-token-4 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | NodeTargetRevisionChanged | Updating node "master-1" from revision 0 to 1 because node master-1 static pod not found |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded changed from False to True ("IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.182.42:443/healthz\": dial tcp 172.30.182.42:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready") |
| | openshift-etcd | kubelet | installer-2-master-2 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0117f94d9f2894980a318780f3c0ab2efba02e72bc7ccb267bd44c4900eb0174" in 2.439s (2.439s including waiting). Image size: 511412209 bytes. |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | RevisionTriggered | new revision 4 triggered by "required configmap/serviceaccount-ca has changed" |
| | openshift-etcd | kubelet | installer-2-master-2 | Created | Created container: installer |
| | openshift-etcd | kubelet | installer-2-master-2 | Started | Started container installer |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | PodCreated | Created Pod/installer-1-master-1 -n openshift-kube-apiserver because it was missing |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Degraded changed from False to True ("CatalogdClusterCatalogOpenshiftCertifiedOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"") |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]\nNodeInstallerDegraded: 1 nodes are failing on revision 3:\nNodeInstallerDegraded: installer: l\" ...\nNodeInstallerDegraded: I1014 13:10:03.558913 1 cmd.go:629] Writing config file \"/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-3/configmaps/kube-controller-manager-pod/version\" ...\nNodeInstallerDegraded: I1014 13:10:03.559051 1 cmd.go:277] Creating directory \"/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-3/configmaps/recycler-config\" ...\nNodeInstallerDegraded: I1014 13:10:03.559184 1 cmd.go:629] Writing config file \"/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-3/configmaps/recycler-config/recycler-pod.yaml\" ...\nNodeInstallerDegraded: I1014 13:10:03.559351 1 cmd.go:277] Creating directory \"/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-3/configmaps/service-ca\" ...\nNodeInstallerDegraded: I1014 13:10:03.559457 1 cmd.go:629] Writing config file \"/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-3/configmaps/service-ca/ca-bundle.crt\" ...\nNodeInstallerDegraded: I1014 13:10:03.559604 1 cmd.go:277] Creating directory \"/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-3/configmaps/serviceaccount-ca\" ...\nNodeInstallerDegraded: I1014 13:10:03.559719 1 cmd.go:629] Writing config file \"/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-3/configmaps/serviceaccount-ca/ca-bundle.crt\" ...\nNodeInstallerDegraded: I1014 13:10:03.559861 1 cmd.go:221] Creating target resource directory \"/etc/kubernetes/static-pod-resources/kube-controller-manager-certs\" ...\nNodeInstallerDegraded: I1014 13:10:03.559953 1 cmd.go:229] Getting secrets ...\nNodeInstallerDegraded: I1014 13:10:03.757643 1 copy.go:32] Got secret openshift-kube-controller-manager/csr-signer\nNodeInstallerDegraded: I1014 13:10:03.956160 1 copy.go:32] Got secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key\nNodeInstallerDegraded: I1014 13:10:03.956539 1 cmd.go:242] Getting config maps ...\nNodeInstallerDegraded: I1014 13:10:04.156837 1 copy.go:60] Got configMap openshift-kube-controller-manager/aggregator-client-ca\nNodeInstallerDegraded: I1014 13:10:04.355849 1 copy.go:52] Failed to get config map openshift-kube-controller-manager/client-ca: configmaps \"client-ca\" not found\nNodeInstallerDegraded: F1014 13:10:04.557705 1 cmd.go:109] failed to copy: configmaps \"client-ca\" not found\nNodeInstallerDegraded: " to "GuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]",Progressing message changed from "NodeInstallerProgressing: 2 nodes are at revision 0; 0 nodes have achieved new revision 3" to "NodeInstallerProgressing: 2 nodes are at revision 0; 0 nodes have achieved new revision 4",Available message changed from "StaticPodsAvailable: 0 nodes are active; 2 nodes are at revision 0; 0 nodes have achieved new revision 3" to "StaticPodsAvailable: 0 nodes are active; 2 nodes are at revision 0; 0 nodes have achieved new revision 4" |
| | openshift-kube-apiserver | kubelet | installer-1-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine |
| | openshift-kube-apiserver | multus | installer-1-master-1 | AddedInterface | Add eth0 [10.128.0.62/23] from ovn-kubernetes |
| | openshift-kube-controller-manager | kubelet | installer-3-retry-1-master-1 | Killing | Stopping container installer |
| | openshift-kube-apiserver | kubelet | installer-1-master-1 | Created | Created container: installer |
| | openshift-kube-apiserver | kubelet | installer-1-master-1 | Started | Started container installer |
| | openshift-etcd | kubelet | installer-2-master-2 | Killing | Stopping container installer |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | PodCreated | Created Pod/installer-4-master-1 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager | kubelet | installer-4-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:67a578604f1437ddb47d87e748b6772d86dd3856048cc355226789db22724b55" already present on machine |
| | openshift-kube-controller-manager | multus | installer-4-master-1 | AddedInterface | Add eth0 [10.128.0.63/23] from ovn-kubernetes |
| | openshift-kube-controller-manager | kubelet | installer-4-master-1 | Started | Started container installer |
| | openshift-kube-controller-manager | kubelet | installer-4-master-1 | Created | Created container: installer |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-installer-controller | etcd-operator | PodCreated | Created Pod/installer-3-master-2 -n openshift-etcd because it was missing |
| | openshift-etcd | multus | installer-3-master-2 | AddedInterface | Add eth0 [10.129.0.33/23] from ovn-kubernetes |
| | openshift-etcd | kubelet | installer-3-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0117f94d9f2894980a318780f3c0ab2efba02e72bc7ccb267bd44c4900eb0174" already present on machine |
| | openshift-etcd | kubelet | installer-3-master-2 | Created | Created container: installer |
| | openshift-etcd | kubelet | installer-3-master-2 | Started | Started container installer |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| (x9) | openshift-controller-manager | kubelet | controller-manager-55bcd8787f-4krnt | FailedMount | MountVolume.SetUp failed for volume "client-ca" : configmap "client-ca" not found |
| | openshift-apiserver-operator | openshift-apiserver-operator-config-observer-configobserver | openshift-apiserver-operator | ObserveStorageUpdated | Updated storage urls to https://192.168.34.10:2379,https://192.168.34.11:2379 |
| | openshift-apiserver-operator | openshift-apiserver-operator-config-observer-configobserver | openshift-apiserver-operator | ObservedConfigChanged | Writing updated observed config:   map[string]any{   ... // 2 identical entries   "routingConfig": map[string]any{"subdomain": string("apps.ocp.openstack.lab")},   "servingInfo": map[string]any{"cipherSuites": []any{string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), string("TLS_CHACHA20_POLY1305_SHA256"), string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), ...}, "minTLSVersion": string("VersionTLS12")},   "storageConfig": map[string]any{   "urls": []any{   string("https://192.168.34.10:2379"), + string("https://192.168.34.11:2379"),   },   },   } |
| | openshift-apiserver | replicaset-controller | apiserver-8644c46667 | SuccessfulCreate | Created pod: apiserver-8644c46667-cg62m |
| | openshift-apiserver | replicaset-controller | apiserver-6576f6bc9d | SuccessfulDelete | Deleted pod: apiserver-6576f6bc9d-r2fhv |
| | openshift-apiserver | kubelet | apiserver-6576f6bc9d-r2fhv | Killing | Stopping container openshift-apiserver |
| | openshift-apiserver | kubelet | apiserver-6576f6bc9d-r2fhv | Killing | Stopping container openshift-apiserver-check-endpoints |
| | openshift-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled up replica set apiserver-8644c46667 to 1 from 0 |
| | openshift-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled down replica set apiserver-6576f6bc9d to 1 from 2 |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Progressing changed from False to True ("OperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 2, desired generation is 3.") |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "OperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 2, desired generation is 3." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 2, desired generation is 3.\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 2, desired generation is 3." |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-ingress-canary namespace |
| | openshift-ingress-canary | daemonset-controller | ingress-canary | SuccessfulCreate | Created pod: ingress-canary-j76rq |
| (x8) | openshift-oauth-apiserver | kubelet | apiserver-c57444595-mj7cx | Unhealthy | Readiness probe failed: HTTP probe failed with statuscode: 500 |
| | openshift-ingress-canary | default-scheduler | ingress-canary-2c8tn | Scheduled | Successfully assigned openshift-ingress-canary/ingress-canary-2c8tn to master-2 |
| | openshift-ingress-canary | kubelet | ingress-canary-2c8tn | FailedMount | MountVolume.SetUp failed for volume "cert" : secret "canary-serving-cert" not found |
| | openshift-ingress-canary | daemonset-controller | ingress-canary | SuccessfulCreate | Created pod: ingress-canary-2c8tn |
| | openshift-ingress-canary | default-scheduler | ingress-canary-j76rq | Scheduled | Successfully assigned openshift-ingress-canary/ingress-canary-j76rq to master-1 |
| | openshift-ingress-canary | kubelet | ingress-canary-j76rq | FailedMount | MountVolume.SetUp failed for volume "cert" : secret "canary-serving-cert" not found |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 2, desired generation is 3.\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 2, desired generation is 3." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 2, desired generation is 3." |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 2, desired generation is 3." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/2 pods have been updated to the latest generation and 1/2 pods are available" |
| | openshift-ingress-operator | cluster-ingress-operator | ingress-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-ingress-canary | multus | ingress-canary-j76rq | AddedInterface | Add eth0 [10.128.0.64/23] from ovn-kubernetes |
| | openshift-ingress-canary | kubelet | ingress-canary-j76rq | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:805f1bf09553ecf2e9d735c881539c011947eee7bf4c977b074e2d0396b9d99a" already present on machine |
| | openshift-ingress-canary | kubelet | ingress-canary-j76rq | Started | Started container serve-healthcheck-canary |
| | openshift-ingress-canary | multus | ingress-canary-2c8tn | AddedInterface | Add eth0 [10.129.0.34/23] from ovn-kubernetes |
| | openshift-ingress-canary | kubelet | ingress-canary-j76rq | Created | Created container: serve-healthcheck-canary |
| | openshift-ingress-canary | kubelet | ingress-canary-2c8tn | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:805f1bf09553ecf2e9d735c881539c011947eee7bf4c977b074e2d0396b9d99a" |
| | openshift-ingress-canary | kubelet | ingress-canary-2c8tn | Created | Created container: serve-healthcheck-canary |
| | openshift-ingress-canary | kubelet | ingress-canary-2c8tn | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:805f1bf09553ecf2e9d735c881539c011947eee7bf4c977b074e2d0396b9d99a" in 2.024s (2.024s including waiting). Image size: 504222816 bytes. |
| | openshift-ingress-canary | kubelet | ingress-canary-2c8tn | Started | Started container serve-healthcheck-canary |
| (x9) | openshift-oauth-apiserver | kubelet | apiserver-c57444595-mj7cx | ProbeError | Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd excluded: ok [+]etcd-readiness excluded: ok [+]poststarthook/start-apiserver-admission-initializer ok [+]informer-sync ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/max-in-flight-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/openshift.io-StartUserInformer ok [+]poststarthook/openshift.io-StartOAuthInformer ok [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok [-]shutdown failed: reason withheld readyz check failed |
| (x4) | openshift-oauth-apiserver | default-scheduler | apiserver-96c4c446c-728v2 | FailedScheduling | 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. |
| (x9) | openshift-route-controller-manager | kubelet | route-controller-manager-6f6c689d49-xd4xv | FailedMount | MountVolume.SetUp failed for volume "client-ca" : configmap "client-ca" not found |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-1 | ProbeError | Readiness probe error: Get "https://192.168.34.11:10259/healthz": EOF body: |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-1 | Unhealthy | Readiness probe failed: Get "https://192.168.34.11:10259/healthz": EOF |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-1 | Killing | Stopping container kube-scheduler |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-kube-scheduler | static-pod-installer | installer-5-master-1 | StaticPodInstallerCompleted | Successfully installed revision 5 |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-1 | Killing | Stopping container kube-scheduler-recovery-controller |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-1 | Killing | Stopping container kube-scheduler-cert-syncer |
| | openshift-oauth-apiserver | default-scheduler | apiserver-96c4c446c-728v2 | Scheduled | Successfully assigned openshift-oauth-apiserver/apiserver-96c4c446c-728v2 to master-2 |
| | openshift-oauth-apiserver | kubelet | apiserver-96c4c446c-728v2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a10f1f5c782b4f4fb9c364625daf34791903749d4149eb87291c70598b16b404" already present on machine |
| | openshift-oauth-apiserver | multus | apiserver-96c4c446c-728v2 | AddedInterface | Add eth0 [10.129.0.35/23] from ovn-kubernetes |
| | openshift-oauth-apiserver | kubelet | apiserver-96c4c446c-728v2 | Created | Created container: fix-audit-permissions |
| | openshift-oauth-apiserver | kubelet | apiserver-96c4c446c-728v2 | Started | Started container fix-audit-permissions |
| | openshift-oauth-apiserver | kubelet | apiserver-96c4c446c-728v2 | Started | Started container oauth-apiserver |
| | openshift-oauth-apiserver | kubelet | apiserver-96c4c446c-728v2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a10f1f5c782b4f4fb9c364625daf34791903749d4149eb87291c70598b16b404" already present on machine |
| | openshift-oauth-apiserver | kubelet | apiserver-96c4c446c-728v2 | Created | Created container: oauth-apiserver |
| | openshift-oauth-apiserver | replicaset-controller | apiserver-c57444595 | SuccessfulDelete | Deleted pod: apiserver-c57444595-zs4m8 |
| | openshift-oauth-apiserver | replicaset-controller | apiserver-96c4c446c | SuccessfulCreate | Created pod: apiserver-96c4c446c-brl6n |
| | openshift-oauth-apiserver | kubelet | apiserver-c57444595-zs4m8 | Killing | Stopping container oauth-apiserver |
| | openshift-oauth-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled up replica set apiserver-96c4c446c to 2 from 1 |
| | openshift-oauth-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled down replica set apiserver-c57444595 to 0 from 1 |
| (x6) | openshift-apiserver | kubelet | apiserver-6576f6bc9d-r2fhv | ProbeError | Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd excluded: ok [+]etcd-readiness excluded: ok [+]poststarthook/start-apiserver-admission-initializer ok [+]informer-sync ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/max-in-flight-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/image.openshift.io-apiserver-caches ok [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok [+]poststarthook/project.openshift.io-projectcache ok [+]poststarthook/project.openshift.io-projectauthorizationcache ok [+]poststarthook/openshift.io-startinformers ok [+]poststarthook/openshift.io-restmapperupdater ok [+]poststarthook/quota.openshift.io-clusterquotamapping ok [-]shutdown failed: reason withheld readyz check failed |
| (x6) | openshift-apiserver | kubelet | apiserver-6576f6bc9d-r2fhv | Unhealthy | Readiness probe failed: HTTP probe failed with statuscode: 500 |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-1 | Started | Started container setup |
| | openshift-kube-apiserver | static-pod-installer | installer-1-master-1 | StaticPodInstallerCompleted | Successfully installed revision 1 |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-1 | Created | Created container: setup |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorVersionChanged | clusteroperator/kube-apiserver version "kube-apiserver" changed from "" to "1.31.13" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorVersionChanged | clusteroperator/kube-apiserver version "operator" changed from "" to "4.18.25" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: status.versions changed from [{"raw-internal" "4.18.25"}] to [{"raw-internal" "4.18.25"} {"kube-apiserver" "1.31.13"} {"operator" "4.18.25"}] |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]" to "GuardControllerDegraded: [Missing PodIP in operand kube-apiserver-master-1 on node master-1, Missing operand on node master-2]" |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b283544da0bfbf6c8c5a11e0ca9fb4daaf4ac4ec910b30c07c7bef65a98f11d" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-1 | Created | Created container: kube-apiserver-cert-syncer |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b283544da0bfbf6c8c5a11e0ca9fb4daaf4ac4ec910b30c07c7bef65a98f11d" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-1 | Created | Created container: kube-apiserver |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-1 | Started | Started container kube-apiserver |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-1 | Started | Started container kube-apiserver-cert-syncer |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]" to "GuardControllerDegraded: [Missing PodIP in operand openshift-kube-scheduler-master-1 on node master-1, Missing operand on node master-2]" |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-1 | Created | Created container: kube-apiserver-cert-regeneration-controller |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-1 | Started | Started container kube-apiserver-cert-regeneration-controller |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "GuardControllerDegraded: Missing operand on node master-2" to "GuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]" |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-1 | Created | Created container: kube-apiserver-insecure-readyz |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-1 | Started | Started container kube-apiserver-insecure-readyz |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-1 | Started | Started container kube-apiserver-check-endpoints |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-1 | Created | Created container: kube-apiserver-check-endpoints |
| | openshift-kube-apiserver | cert-regeneration-controller | cert-regeneration-controller-lock | LeaderElection | master-1_86bb464e-b509-4e2d-93f3-2fac0605d9e1 became leader |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-1 | KubeAPIReadyz | readyz=true |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "GuardControllerDegraded: [Missing PodIP in operand openshift-kube-scheduler-master-1 on node master-1, Missing operand on node master-2]" to "GuardControllerDegraded: Missing operand on node master-2" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-guardcontroller | kube-apiserver-operator | PodCreated | Created Pod/kube-apiserver-guard-master-1 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "GuardControllerDegraded: [Missing PodIP in operand kube-apiserver-master-1 on node master-1, Missing operand on node master-2]" to "GuardControllerDegraded: Missing operand on node master-2" |
| | openshift-kube-apiserver | kubelet | kube-apiserver-guard-master-1 | Started | Started container guard |
| | openshift-kube-apiserver | multus | kube-apiserver-guard-master-1 | AddedInterface | Add eth0 [10.128.0.65/23] from ovn-kubernetes |
| | openshift-kube-apiserver | kubelet | kube-apiserver-guard-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-guard-master-1 | Created | Created container: guard |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-guardcontroller | kube-apiserver-operator | PodUpdated | Updated Pod/kube-apiserver-guard-master-1 -n openshift-kube-apiserver because it changed |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: status.versions changed from [{"raw-internal" "4.18.25"}] to [{"raw-internal" "4.18.25"} {"kube-controller-manager" "1.31.13"} {"operator" "4.18.25"}] |
| | openshift-kube-controller-manager | static-pod-installer | installer-4-master-1 | StaticPodInstallerCompleted | Successfully installed revision 4 |
| (x2) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorVersionChanged | clusteroperator/kube-controller-manager version "kube-controller-manager" changed from "" to "1.31.13" |
| (x2) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorVersionChanged | clusteroperator/kube-controller-manager version "operator" changed from "" to "4.18.25" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]" to "GuardControllerDegraded: [Missing PodIP in operand kube-controller-manager-master-1 on node master-1, Missing operand on node master-2]" |
| (x11) | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-guard-master-1 | ProbeError | Readiness probe error: Get "https://192.168.34.11:10259/healthz": dial tcp 192.168.34.11:10259: connect: connection refused body: |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-1 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:19291a8938541dd95496e6f04aad7abf914ea2c8d076c1f149a12368682f85d4" |
| (x11) | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-guard-master-1 | Unhealthy | Readiness probe failed: Get "https://192.168.34.11:10259/healthz": dial tcp 192.168.34.11:10259: connect: connection refused |
| (x6) | openshift-apiserver | default-scheduler | apiserver-8644c46667-cg62m | FailedScheduling | 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master-1 |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:19291a8938541dd95496e6f04aad7abf914ea2c8d076c1f149a12368682f85d4" in 2.659s (2.659s including waiting). Image size: 498279559 bytes. | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GuardControllerDegraded: [Missing PodIP in operand kube-controller-manager-master-1 on node master-1, Missing operand on node master-2]" to "GuardControllerDegraded: [Missing PodIP in operand kube-controller-manager-master-1 on node master-1, Missing operand on node master-2]\nStaticPodsDegraded: pod/kube-controller-manager-master-1 container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-1 container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-1 container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-1 container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: " | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master-1 |
Created |
Created container: kube-controller-manager-cert-syncer | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master-1 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:67a578604f1437ddb47d87e748b6772d86dd3856048cc355226789db22724b55" already present on machine | |
openshift-kube-controller-manager |
cluster-policy-controller |
cluster-policy-controller-lock |
LeaderElection |
master-1_6e172dbb-c777-4c5d-a268-afb115624621 became leader | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master-1 |
Started |
Started container kube-controller-manager-recovery-controller | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master-1 |
Created |
Created container: kube-controller-manager-recovery-controller | |
openshift-kube-controller-manager |
cluster-policy-controller |
kube-controller-manager-master-1 |
ControlPlaneTopology |
unable to get control plane topology, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master-1 |
Started |
Started container cluster-policy-controller | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master-1 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:67a578604f1437ddb47d87e748b6772d86dd3856048cc355226789db22724b55" already present on machine | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master-1 |
Created |
Created container: cluster-policy-controller | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master-1 |
Started |
Started container kube-controller-manager-cert-syncer | |
openshift-kube-controller-manager |
multus |
kube-controller-manager-guard-master-1 |
AddedInterface |
Add eth0 [10.128.0.66/23] from ovn-kubernetes | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-guardcontroller |
kube-controller-manager-operator |
PodCreated |
Created Pod/kube-controller-manager-guard-master-1 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GuardControllerDegraded: [Missing PodIP in operand kube-controller-manager-master-1 on node master-1, Missing operand on node master-2]\nStaticPodsDegraded: pod/kube-controller-manager-master-1 container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-1 container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-1 container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-1 container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: " to "GuardControllerDegraded: Missing operand on node master-2\nStaticPodsDegraded: pod/kube-controller-manager-master-1 container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-1 container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-1 container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-1 container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: " | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-guard-master-1 |
Created |
Created container: guard | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-guard-master-1 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:67a578604f1437ddb47d87e748b6772d86dd3856048cc355226789db22724b55" already present on machine | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-guard-master-1 |
Started |
Started container guard | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-guardcontroller |
kube-controller-manager-operator |
PodUpdated |
Updated Pod/kube-controller-manager-guard-master-1 -n openshift-kube-controller-manager because it changed | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GuardControllerDegraded: Missing operand on node master-2\nStaticPodsDegraded: pod/kube-controller-manager-master-1 container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-1 container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-1 container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-1 container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: " to "GuardControllerDegraded: Missing operand on node master-2" | |
openshift-etcd |
kubelet |
etcd-master-2 |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:145b8ac6899b60bd933b5fe64e3eb49ddbc7401a13f30fda6fd207697e8c9ab8" | |
openshift-etcd |
static-pod-installer |
installer-3-master-2 |
StaticPodInstallerCompleted |
Successfully installed revision 3 | |
openshift-apiserver |
default-scheduler |
apiserver-8644c46667-cg62m |
Scheduled |
Successfully assigned openshift-apiserver/apiserver-8644c46667-cg62m to master-2 | |
openshift-etcd |
kubelet |
etcd-master-2 |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:145b8ac6899b60bd933b5fe64e3eb49ddbc7401a13f30fda6fd207697e8c9ab8" in 2.409s (2.409s including waiting). Image size: 531186824 bytes. | |
openshift-apiserver |
kubelet |
apiserver-8644c46667-cg62m |
Started |
Started container fix-audit-permissions | |
openshift-etcd |
kubelet |
etcd-master-2 |
Started |
Started container etcd-ensure-env-vars | |
openshift-etcd |
kubelet |
etcd-master-2 |
Created |
Created container: etcd-ensure-env-vars | |
(x2) | openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master-1 |
Started |
Started container wait-for-host-port |
(x2) | openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master-1 |
Created |
Created container: wait-for-host-port |
(x2) | openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master-1 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b283544da0bfbf6c8c5a11e0ca9fb4daaf4ac4ec910b30c07c7bef65a98f11d" already present on machine |
openshift-apiserver |
kubelet |
apiserver-8644c46667-cg62m |
Created |
Created container: fix-audit-permissions | |
openshift-apiserver |
multus |
apiserver-8644c46667-cg62m |
AddedInterface |
Add eth0 [10.129.0.36/23] from ovn-kubernetes | |
openshift-apiserver |
kubelet |
apiserver-8644c46667-cg62m |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:db6d4edac103c373eb6bee221074d39e3707377b4d26444e98afb1a1363b3cb7" already present on machine | |
openshift-etcd |
kubelet |
etcd-master-2 |
Created |
Created container: setup | |
openshift-etcd |
kubelet |
etcd-master-2 |
Started |
Started container setup | |
openshift-etcd |
kubelet |
etcd-master-2 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:145b8ac6899b60bd933b5fe64e3eb49ddbc7401a13f30fda6fd207697e8c9ab8" already present on machine | |
openshift-etcd |
kubelet |
etcd-master-2 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:145b8ac6899b60bd933b5fe64e3eb49ddbc7401a13f30fda6fd207697e8c9ab8" already present on machine | |
openshift-etcd |
kubelet |
etcd-master-2 |
Started |
Started container etcd-resources-copy | |
openshift-etcd |
kubelet |
etcd-master-2 |
Created |
Created container: etcd-resources-copy | |
openshift-apiserver |
kubelet |
apiserver-8644c46667-cg62m |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:db6d4edac103c373eb6bee221074d39e3707377b4d26444e98afb1a1363b3cb7" already present on machine | |
openshift-apiserver |
kubelet |
apiserver-8644c46667-cg62m |
Started |
Started container openshift-apiserver-check-endpoints | |
openshift-apiserver |
kubelet |
apiserver-8644c46667-cg62m |
Created |
Created container: openshift-apiserver | |
openshift-apiserver |
kubelet |
apiserver-8644c46667-cg62m |
Started |
Started container openshift-apiserver | |
openshift-apiserver |
kubelet |
apiserver-8644c46667-cg62m |
Created |
Created container: openshift-apiserver-check-endpoints | |
openshift-apiserver |
kubelet |
apiserver-8644c46667-cg62m |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine | |
openshift-etcd |
kubelet |
etcd-master-2 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:145b8ac6899b60bd933b5fe64e3eb49ddbc7401a13f30fda6fd207697e8c9ab8" already present on machine | |
openshift-etcd |
kubelet |
etcd-master-2 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:145b8ac6899b60bd933b5fe64e3eb49ddbc7401a13f30fda6fd207697e8c9ab8" already present on machine | |
openshift-etcd |
kubelet |
etcd-master-2 |
Started |
Started container etcdctl | |
openshift-etcd |
kubelet |
etcd-master-2 |
Created |
Created container: etcdctl | |
openshift-etcd |
kubelet |
etcd-master-2 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0117f94d9f2894980a318780f3c0ab2efba02e72bc7ccb267bd44c4900eb0174" already present on machine | |
openshift-etcd |
kubelet |
etcd-master-2 |
Created |
Created container: etcd-readyz | |
openshift-etcd |
kubelet |
etcd-master-2 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0117f94d9f2894980a318780f3c0ab2efba02e72bc7ccb267bd44c4900eb0174" already present on machine | |
openshift-etcd |
kubelet |
etcd-master-2 |
Started |
Started container etcd-readyz | |
openshift-etcd |
kubelet |
etcd-master-2 |
Created |
Created container: etcd | |
openshift-etcd |
kubelet |
etcd-master-2 |
Created |
Created container: etcd-rev | |
openshift-etcd |
kubelet |
etcd-master-2 |
Created |
Created container: etcd-metrics | |
openshift-etcd |
kubelet |
etcd-master-2 |
Started |
Started container etcd | |
openshift-etcd |
kubelet |
etcd-master-2 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:145b8ac6899b60bd933b5fe64e3eb49ddbc7401a13f30fda6fd207697e8c9ab8" already present on machine | |
openshift-etcd |
kubelet |
etcd-master-2 |
Started |
Started container etcd-metrics | |
(x8) | openshift-oauth-apiserver |
kubelet |
apiserver-c57444595-zs4m8 |
Unhealthy |
Readiness probe failed: HTTP probe failed with statuscode: 500 |
openshift-etcd |
kubelet |
etcd-master-2 |
Started |
Started container etcd-rev | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ConfigMapUpdated |
Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: caused by changes in data.config-file.yaml | |
openshift-apiserver |
replicaset-controller |
apiserver-6576f6bc9d |
SuccessfulDelete |
Deleted pod: apiserver-6576f6bc9d-xfzjr | |
openshift-etcd |
multus |
etcd-guard-master-2 |
AddedInterface |
Add eth0 [10.129.0.37/23] from ovn-kubernetes | |
openshift-etcd |
kubelet |
etcd-guard-master-2 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0117f94d9f2894980a318780f3c0ab2efba02e72bc7ccb267bd44c4900eb0174" already present on machine | |
openshift-apiserver |
replicaset-controller |
apiserver-8644c46667 |
SuccessfulCreate |
Created pod: apiserver-8644c46667-7z9ft | |
openshift-apiserver |
kubelet |
apiserver-6576f6bc9d-xfzjr |
Killing |
Stopping container openshift-apiserver | |
openshift-apiserver |
kubelet |
apiserver-6576f6bc9d-xfzjr |
Killing |
Stopping container openshift-apiserver-check-endpoints | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-guardcontroller |
etcd-operator |
PodCreated |
Created Pod/etcd-guard-master-2 -n openshift-etcd because it was missing | |
openshift-apiserver |
deployment-controller |
apiserver |
ScalingReplicaSet |
Scaled up replica set apiserver-8644c46667 to 2 from 1 | |
openshift-apiserver |
deployment-controller |
apiserver |
ScalingReplicaSet |
Scaled down replica set apiserver-6576f6bc9d to 0 from 1 | |
openshift-etcd |
kubelet |
etcd-guard-master-2 |
Created |
Created container: guard | |
openshift-etcd |
kubelet |
etcd-guard-master-2 |
Started |
Started container guard | |
(x6) | openshift-oauth-apiserver |
default-scheduler |
apiserver-96c4c446c-brl6n |
FailedScheduling |
0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-installer-controller |
kube-controller-manager-operator |
NodeCurrentRevisionChanged |
Updated node "master-1" from revision 0 to 4 because static pod is ready | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Progressing message changed from "NodeInstallerProgressing: 2 nodes are at revision 0; 0 nodes have achieved new revision 4" to "NodeInstallerProgressing: 1 node is at revision 0; 1 node is at revision 4",Available changed from False to True ("StaticPodsAvailable: 1 nodes are active; 1 node is at revision 0; 1 node is at revision 4") | |
(x9) | openshift-oauth-apiserver |
kubelet |
apiserver-c57444595-zs4m8 |
ProbeError |
Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd excluded: ok [+]etcd-readiness excluded: ok [+]poststarthook/start-apiserver-admission-initializer ok [+]informer-sync ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/max-in-flight-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/openshift.io-StartUserInformer ok [+]poststarthook/openshift.io-StartOAuthInformer ok [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok [-]shutdown failed: reason withheld readyz check failed |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-installer-controller |
kube-controller-manager-operator |
NodeTargetRevisionChanged |
Updating node "master-2" from revision 0 to 4 because node master-2 static pod not found | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-installer-controller |
kube-controller-manager-operator |
PodCreated |
Created Pod/installer-4-master-2 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-controller-manager |
multus |
installer-4-master-2 |
AddedInterface |
Add eth0 [10.129.0.38/23] from ovn-kubernetes | |
openshift-kube-controller-manager |
kubelet |
installer-4-master-2 |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:67a578604f1437ddb47d87e748b6772d86dd3856048cc355226789db22724b55" | |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/client-ca -n openshift-route-controller-manager because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/client-ca -n openshift-controller-manager because it was missing | |
openshift-controller-manager |
deployment-controller |
controller-manager |
ScalingReplicaSet |
Scaled up replica set controller-manager-86659fd8d to 1 from 0 | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-guardcontroller |
etcd-operator |
PodUpdated |
Updated Pod/etcd-guard-master-2 -n openshift-etcd because it changed | |
openshift-route-controller-manager |
deployment-controller |
route-controller-manager |
ScalingReplicaSet |
Scaled down replica set route-controller-manager-7f89f9db8c to 0 from 1 | |
openshift-controller-manager |
replicaset-controller |
controller-manager-bcf7659b |
SuccessfulDelete |
Deleted pod: controller-manager-bcf7659b-pckjm | |
openshift-controller-manager |
deployment-controller |
controller-manager |
ScalingReplicaSet |
Scaled down replica set controller-manager-bcf7659b to 0 from 1 | |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator-lock |
LeaderElection |
openshift-controller-manager-operator-5745565d84-5l45t_40d3da8f-b44f-41f6-b593-df0bb53b01bd became leader | |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} | |
openshift-controller-manager |
replicaset-controller |
controller-manager-86659fd8d |
SuccessfulCreate |
Created pod: controller-manager-86659fd8d-zhj4d | |
openshift-route-controller-manager |
deployment-controller |
route-controller-manager |
ScalingReplicaSet |
Scaled down replica set route-controller-manager-6f6c689d49 to 0 from 1 | |
openshift-route-controller-manager |
replicaset-controller |
route-controller-manager-7f89f9db8c |
SuccessfulDelete |
Deleted pod: route-controller-manager-7f89f9db8c-j4hd5 | |
openshift-route-controller-manager |
replicaset-controller |
route-controller-manager-67b9857c45 |
SuccessfulCreate |
Created pod: route-controller-manager-67b9857c45-pxqsr | |
openshift-route-controller-manager |
replicaset-controller |
route-controller-manager-6f6c689d49 |
SuccessfulDelete |
Deleted pod: route-controller-manager-6f6c689d49-xd4xv | |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
ConfigMapUpdated |
Updated ConfigMap/config -n openshift-controller-manager: caused by changes in data.openshift-controller-manager.client-ca.configmap | |
openshift-route-controller-manager |
deployment-controller |
route-controller-manager |
ScalingReplicaSet |
Scaled up replica set route-controller-manager-77674cffc8 to 1 from 0 | |
openshift-route-controller-manager |
default-scheduler |
route-controller-manager-67b9857c45-pxqsr |
FailedScheduling |
0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. | |
openshift-controller-manager |
replicaset-controller |
controller-manager-55bcd8787f |
SuccessfulDelete |
Deleted pod: controller-manager-55bcd8787f-4krnt | |
openshift-controller-manager |
replicaset-controller |
controller-manager-56cfb99cfd |
SuccessfulCreate |
Created pod: controller-manager-56cfb99cfd-rq5ck | |
openshift-route-controller-manager |
deployment-controller |
route-controller-manager |
ScalingReplicaSet |
Scaled up replica set route-controller-manager-67b9857c45 to 1 from 0 | |
openshift-route-controller-manager |
replicaset-controller |
route-controller-manager-77674cffc8 |
SuccessfulCreate |
Created pod: route-controller-manager-77674cffc8-k5fvv | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager |
openshift-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/controller-manager: updated replicas is 1, desired replicas is 2\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: updated replicas is 1, desired replicas is 2" to "Progressing: deployment/controller-manager: observed generation is 4, desired generation is 5.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/controller-manager: updated replicas is 1, desired replicas is 2\nProgressing: deployment/route-controller-manager: observed generation is 2, desired generation is 3.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: updated replicas is 1, desired replicas is 2" | |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
ConfigMapUpdated |
Updated ConfigMap/config -n openshift-route-controller-manager: caused by changes in data.openshift-route-controller-manager.client-ca.configmap | |
openshift-kube-controller-manager |
kubelet |
installer-4-master-2 |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:67a578604f1437ddb47d87e748b6772d86dd3856048cc355226789db22724b55" in 2.654s (2.654s including waiting). Image size: 501914388 bytes. | |
(x2) | openshift-route-controller-manager |
default-scheduler |
route-controller-manager-77674cffc8-k5fvv |
FailedScheduling |
0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. |
openshift-route-controller-manager |
default-scheduler |
route-controller-manager-67b9857c45-pxqsr |
Scheduled |
Successfully assigned openshift-route-controller-manager/route-controller-manager-67b9857c45-pxqsr to master-2 | |
(x2) | openshift-controller-manager |
default-scheduler |
controller-manager-86659fd8d-zhj4d |
FailedScheduling |
0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. |
(x2) | openshift-controller-manager |
default-scheduler |
controller-manager-56cfb99cfd-rq5ck |
FailedScheduling |
0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. |
openshift-oauth-apiserver |
default-scheduler |
apiserver-96c4c446c-brl6n |
Scheduled |
Successfully assigned openshift-oauth-apiserver/apiserver-96c4c446c-brl6n to master-1 | |
openshift-kube-controller-manager |
kubelet |
installer-4-master-2 |
Started |
Started container installer | |
openshift-kube-controller-manager |
kubelet |
installer-4-master-2 |
Created |
Created container: installer | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager |
openshift-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 4, desired generation is 5.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/controller-manager: updated replicas is 1, desired replicas is 2\nProgressing: deployment/route-controller-manager: observed generation is 2, desired generation is 3.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: updated replicas is 1, desired replicas is 2" to "Progressing: deployment/controller-manager: observed generation is 5, desired generation is 6.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/controller-manager: updated replicas is 1, desired replicas is 2\nProgressing: deployment/route-controller-manager: observed generation is 3, desired generation is 4.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: updated replicas is 0, desired replicas is 2" | |
openshift-oauth-apiserver |
kubelet |
apiserver-96c4c446c-brl6n |
Created |
Created container: oauth-apiserver | |
openshift-oauth-apiserver |
kubelet |
apiserver-96c4c446c-brl6n |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a10f1f5c782b4f4fb9c364625daf34791903749d4149eb87291c70598b16b404" already present on machine | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master-1 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b283544da0bfbf6c8c5a11e0ca9fb4daaf4ac4ec910b30c07c7bef65a98f11d" already present on machine | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Progressing message changed from "NodeInstallerProgressing: 2 nodes are at revision 0; 0 nodes have achieved new revision 1" to "NodeInstallerProgressing: 1 node is at revision 0; 1 node is at revision 1",Available changed from False to True ("StaticPodsAvailable: 1 nodes are active; 1 node is at revision 0; 1 node is at revision 1") | |
openshift-oauth-apiserver |
kubelet |
apiserver-96c4c446c-brl6n |
Started |
Started container fix-audit-permissions | |
openshift-oauth-apiserver |
multus |
apiserver-96c4c446c-brl6n |
AddedInterface |
Add eth0 [10.128.0.67/23] from ovn-kubernetes | |
openshift-oauth-apiserver |
kubelet |
apiserver-96c4c446c-brl6n |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a10f1f5c782b4f4fb9c364625daf34791903749d4149eb87291c70598b16b404" already present on machine | |
openshift-oauth-apiserver |
kubelet |
apiserver-96c4c446c-brl6n |
Created |
Created container: fix-audit-permissions | |
openshift-route-controller-manager |
kubelet |
route-controller-manager-67b9857c45-pxqsr |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:da8d1dd8c084774a49a88aef98ef62c56592a46d75830ed0d3e5e363859e3b08" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-installer-controller |
kube-apiserver-operator |
NodeCurrentRevisionChanged |
Updated node "master-1" from revision 0 to 1 because static pod is ready | |
openshift-route-controller-manager |
multus |
route-controller-manager-67b9857c45-pxqsr |
AddedInterface |
Add eth0 [10.129.0.39/23] from ovn-kubernetes | |
openshift-oauth-apiserver |
kubelet |
apiserver-96c4c446c-brl6n |
Started |
Started container oauth-apiserver | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master-1 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f81582ec6e6cc159d578a2d70ce7c8a4db8eb0172334226c9123770d7d2a1642" already present on machine | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master-1 |
Started |
Started container kube-scheduler-recovery-controller | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master-1 |
Created |
Created container: kube-scheduler | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master-1 |
Created |
Created container: kube-scheduler-cert-syncer | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master-1 |
Created |
Created container: kube-scheduler-recovery-controller | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master-1 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f81582ec6e6cc159d578a2d70ce7c8a4db8eb0172334226c9123770d7d2a1642" already present on machine | |
openshift-kube-scheduler |
default-scheduler |
kube-scheduler |
LeaderElection |
master-1_8e0bc555-f8ad-4f84-b6c8-9c43fe404d32 became leader | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master-1 |
Started |
Started container kube-scheduler | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master-1 |
Started |
Started container kube-scheduler-cert-syncer | |
openshift-route-controller-manager |
kubelet |
route-controller-manager-67b9857c45-pxqsr |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:da8d1dd8c084774a49a88aef98ef62c56592a46d75830ed0d3e5e363859e3b08" in 1.861s (1.861s including waiting). Image size: 480132757 bytes. | |
openshift-controller-manager |
kubelet |
controller-manager-56cfb99cfd-rq5ck |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5950bf8a793f25392f3fdfa898a2bfe0998be83e86a5f93c07a9d22a0816b9c6" | |
openshift-route-controller-manager |
kubelet |
route-controller-manager-67b9857c45-pxqsr |
Started |
Started container route-controller-manager | |
openshift-controller-manager |
multus |
controller-manager-86659fd8d-zhj4d |
AddedInterface |
Add eth0 [10.128.0.69/23] from ovn-kubernetes | |
openshift-route-controller-manager |
multus |
route-controller-manager-77674cffc8-k5fvv |
AddedInterface |
Add eth0 [10.128.0.68/23] from ovn-kubernetes | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager |
openshift-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 5, desired generation is 6.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/controller-manager: updated replicas is 1, desired replicas is 2\nProgressing: deployment/route-controller-manager: observed generation is 3, desired generation is 4.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: updated replicas is 0, desired replicas is 2" to "Progressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/controller-manager: updated replicas is 1, desired replicas is 2\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: updated replicas is 1, desired replicas is 2" | |
openshift-controller-manager |
multus |
controller-manager-56cfb99cfd-rq5ck |
AddedInterface |
Add eth0 [10.129.0.40/23] from ovn-kubernetes | |
openshift-controller-manager |
kubelet |
controller-manager-86659fd8d-zhj4d |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5950bf8a793f25392f3fdfa898a2bfe0998be83e86a5f93c07a9d22a0816b9c6" | |
openshift-route-controller-manager |
kubelet |
route-controller-manager-77674cffc8-k5fvv |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:da8d1dd8c084774a49a88aef98ef62c56592a46d75830ed0d3e5e363859e3b08" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-installer-controller |
kube-apiserver-operator |
NodeTargetRevisionChanged |
Updating node "master-2" from revision 0 to 1 because node master-2 static pod not found | |
openshift-route-controller-manager |
kubelet |
route-controller-manager-67b9857c45-pxqsr |
Created |
Created container: route-controller-manager | |
openshift-route-controller-manager |
route-controller-manager |
openshift-route-controllers |
LeaderElection |
route-controller-manager-67b9857c45-pxqsr_fadf1a82-2f6b-4706-8809-bddad69eaeb8 became leader | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-etcd-client |
etcd-operator |
MemberAddAsLearner |
successfully added new member https://192.168.34.12:2380 | |
(x3) | openshift-etcd |
kubelet |
etcd-guard-master-2 |
Unhealthy |
Readiness probe failed: Get "https://192.168.34.12:9980/readyz": net/http: request canceled (Client.Timeout exceeded while awaiting headers) |
openshift-controller-manager |
replicaset-controller |
controller-manager-86659fd8d |
SuccessfulDelete |
Deleted pod: controller-manager-86659fd8d-zhj4d | |
openshift-controller-manager |
kubelet |
controller-manager-56cfb99cfd-rq5ck |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5950bf8a793f25392f3fdfa898a2bfe0998be83e86a5f93c07a9d22a0816b9c6" in 2.564s (2.564s including waiting). Image size: 551247630 bytes. | |
(x4) | openshift-controller-manager |
deployment-controller |
controller-manager |
ScalingReplicaSet |
(combined from similar events): Scaled up replica set controller-manager-56cfb99cfd to 2 from 1 |
openshift-controller-manager |
replicaset-controller |
controller-manager-56cfb99cfd |
SuccessfulCreate |
Created pod: controller-manager-56cfb99cfd-9798f | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-installer-controller |
kube-apiserver-operator |
PodCreated |
Created Pod/installer-1-master-2 -n openshift-kube-apiserver because it was missing | |
openshift-controller-manager |
kubelet |
controller-manager-56cfb99cfd-rq5ck |
Started |
Started container controller-manager | |
openshift-controller-manager |
kubelet |
controller-manager-56cfb99cfd-rq5ck |
Created |
Created container: controller-manager | |
openshift-controller-manager |
openshift-controller-manager |
openshift-master-controllers |
LeaderElection |
controller-manager-56cfb99cfd-rq5ck became leader | |
openshift-controller-manager |
kubelet |
controller-manager-86659fd8d-zhj4d |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5950bf8a793f25392f3fdfa898a2bfe0998be83e86a5f93c07a9d22a0816b9c6" in 3.205s (3.205s including waiting). Image size: 551247630 bytes. | |
openshift-controller-manager |
kubelet |
controller-manager-86659fd8d-zhj4d |
Created |
Created container: controller-manager | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-etcd-client |
etcd-operator |
MemberPromote |
successfully promoted learner member https://192.168.34.12:2380 | |
openshift-controller-manager |
default-scheduler |
controller-manager-56cfb99cfd-9798f |
FailedScheduling |
0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. | |
openshift-route-controller-manager |
kubelet |
route-controller-manager-77674cffc8-k5fvv |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:da8d1dd8c084774a49a88aef98ef62c56592a46d75830ed0d3e5e363859e3b08" in 3.138s (3.138s including waiting). Image size: 480132757 bytes. | |
openshift-controller-manager |
kubelet |
controller-manager-86659fd8d-zhj4d |
Started |
Started container controller-manager | |
openshift-route-controller-manager |
default-scheduler |
route-controller-manager-77674cffc8-gf5tz |
FailedScheduling |
0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. | |
openshift-controller-manager |
kubelet |
controller-manager-86659fd8d-zhj4d |
Unhealthy |
Readiness probe failed: Get "https://10.128.0.69:8443/healthz": read tcp 10.128.0.2:49768->10.128.0.69:8443: read: connection reset by peer | |
openshift-route-controller-manager |
replicaset-controller |
route-controller-manager-77674cffc8 |
SuccessfulCreate |
Created pod: route-controller-manager-77674cffc8-gf5tz | |
openshift-route-controller-manager |
replicaset-controller |
route-controller-manager-67b9857c45 |
SuccessfulDelete |
Deleted pod: route-controller-manager-67b9857c45-pxqsr | |
openshift-route-controller-manager |
deployment-controller |
route-controller-manager |
ScalingReplicaSet |
Scaled down replica set route-controller-manager-67b9857c45 to 0 from 1 | |
openshift-controller-manager |
kubelet |
controller-manager-86659fd8d-zhj4d |
ProbeError |
Readiness probe error: Get "https://10.128.0.69:8443/healthz": read tcp 10.128.0.2:49768->10.128.0.69:8443: read: connection reset by peer body: | |
openshift-kube-apiserver |
multus |
installer-1-master-2 |
AddedInterface |
Add eth0 [10.129.0.41/23] from ovn-kubernetes | |
openshift-controller-manager |
kubelet |
controller-manager-86659fd8d-zhj4d |
Killing |
Stopping container controller-manager | |
openshift-route-controller-manager |
deployment-controller |
route-controller-manager |
ScalingReplicaSet |
Scaled up replica set route-controller-manager-77674cffc8 to 2 from 1 | |
openshift-route-controller-manager |
kubelet |
route-controller-manager-67b9857c45-pxqsr |
Killing |
Stopping container route-controller-manager | |
openshift-kube-apiserver |
kubelet |
installer-1-master-2 |
Started |
Started container installer | |
openshift-kube-apiserver |
kubelet |
installer-1-master-2 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine | |
openshift-kube-apiserver |
kubelet |
installer-1-master-2 |
Created |
Created container: installer | |
openshift-controller-manager |
default-scheduler |
controller-manager-56cfb99cfd-9798f |
Scheduled |
Successfully assigned openshift-controller-manager/controller-manager-56cfb99cfd-9798f to master-1 | |
openshift-marketplace |
kubelet |
community-operators-cf69d |
Killing |
Stopping container registry-server | |
openshift-marketplace |
default-scheduler |
community-operators-7flhc |
Scheduled |
Successfully assigned openshift-marketplace/community-operators-7flhc to master-2 | |
openshift-marketplace |
kubelet |
certified-operators-kpbmd |
Killing |
Stopping container registry-server | |
openshift-marketplace |
default-scheduler |
certified-operators-629l7 |
Scheduled |
Successfully assigned openshift-marketplace/certified-operators-629l7 to master-2 | |
openshift-marketplace |
kubelet |
certified-operators-629l7 |
Started |
Started container extract-utilities | |
openshift-controller-manager |
kubelet |
controller-manager-56cfb99cfd-9798f |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5950bf8a793f25392f3fdfa898a2bfe0998be83e86a5f93c07a9d22a0816b9c6" already present on machine | |
openshift-controller-manager |
kubelet |
controller-manager-56cfb99cfd-9798f |
Started |
Started container controller-manager | |
openshift-marketplace |
multus |
community-operators-7flhc |
AddedInterface |
Add eth0 [10.129.0.43/23] from ovn-kubernetes | |
openshift-marketplace |
kubelet |
community-operators-7flhc |
Pulling |
Pulling image "registry.redhat.io/redhat/community-operator-index:v4.18" | |
openshift-controller-manager |
kubelet |
controller-manager-56cfb99cfd-9798f |
Created |
Created container: controller-manager | |
openshift-marketplace |
kubelet |
certified-operators-629l7 |
Pulling |
Pulling image "registry.redhat.io/redhat/certified-operator-index:v4.18" | |
openshift-marketplace |
kubelet |
community-operators-7flhc |
Started |
Started container extract-utilities | |
openshift-marketplace |
kubelet |
community-operators-7flhc |
Created |
Created container: extract-utilities | |
openshift-marketplace |
multus |
certified-operators-629l7 |
AddedInterface |
Add eth0 [10.129.0.42/23] from ovn-kubernetes | |
openshift-marketplace |
kubelet |
certified-operators-629l7 |
Created |
Created container: extract-utilities | |
openshift-marketplace |
kubelet |
certified-operators-629l7 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac3e8e21a2acf57632da1156613d3ce424cc06446f4bd47349c7919367e1ff0f" already present on machine | |
openshift-controller-manager |
multus |
controller-manager-56cfb99cfd-9798f |
AddedInterface |
Add eth0 [10.128.0.70/23] from ovn-kubernetes | |
openshift-marketplace |
kubelet |
community-operators-7flhc |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac3e8e21a2acf57632da1156613d3ce424cc06446f4bd47349c7919367e1ff0f" already present on machine | |
openshift-marketplace |
kubelet |
community-operators-7flhc |
Started |
Started container extract-content | |
openshift-route-controller-manager |
multus |
route-controller-manager-77674cffc8-gf5tz |
AddedInterface |
Add eth0 [10.129.0.44/23] from ovn-kubernetes | |
openshift-marketplace |
kubelet |
redhat-marketplace-frksz |
Killing |
Stopping container registry-server | |
openshift-marketplace |
kubelet |
community-operators-7flhc |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5" | |
openshift-marketplace |
kubelet |
community-operators-7flhc |
Created |
Created container: extract-content | |
openshift-marketplace |
kubelet |
community-operators-7flhc |
Pulled |
Successfully pulled image "registry.redhat.io/redhat/community-operator-index:v4.18" in 592ms (592ms including waiting). Image size: 1181047702 bytes. | |
openshift-marketplace |
kubelet |
certified-operators-629l7 |
Started |
Started container extract-content | |
openshift-marketplace |
kubelet |
certified-operators-629l7 |
Created |
Created container: extract-content | |
openshift-marketplace |
kubelet |
certified-operators-629l7 |
Pulled |
Successfully pulled image "registry.redhat.io/redhat/certified-operator-index:v4.18" in 799ms (799ms including waiting). Image size: 1199160216 bytes. | |
openshift-route-controller-manager |
kubelet |
route-controller-manager-77674cffc8-gf5tz |
Started |
Started container route-controller-manager | |
openshift-route-controller-manager |
kubelet |
route-controller-manager-77674cffc8-gf5tz |
Created |
Created container: route-controller-manager | |
openshift-route-controller-manager |
route-controller-manager |
openshift-route-controllers |
LeaderElection |
route-controller-manager-77674cffc8-gf5tz_02ba370c-dd52-4f6f-bb00-f6979c62f986 became leader | |
openshift-route-controller-manager |
kubelet |
route-controller-manager-77674cffc8-gf5tz |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:da8d1dd8c084774a49a88aef98ef62c56592a46d75830ed0d3e5e363859e3b08" already present on machine | |
openshift-marketplace |
default-scheduler |
redhat-operators-m2vwm |
Scheduled |
Successfully assigned openshift-marketplace/redhat-operators-m2vwm to master-2 | |
openshift-marketplace |
kubelet |
certified-operators-629l7 |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5" | |
openshift-marketplace |
kubelet |
redhat-operators-xl9gv |
Killing |
Stopping container registry-server | |
openshift-marketplace |
default-scheduler |
redhat-marketplace-2p79c |
Scheduled |
Successfully assigned openshift-marketplace/redhat-marketplace-2p79c to master-2 | |
openshift-marketplace |
kubelet |
community-operators-7flhc |
Started |
Started container registry-server | |
openshift-marketplace |
kubelet |
community-operators-7flhc |
Created |
Created container: registry-server | |
openshift-marketplace |
kubelet |
community-operators-7flhc |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5" in 487ms (487ms including waiting). Image size: 911296197 bytes. | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ConfigMapUpdated |
Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml | |
openshift-marketplace |
kubelet |
redhat-marketplace-2p79c |
Started |
Started container extract-utilities | |
openshift-marketplace |
kubelet |
redhat-marketplace-2p79c |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac3e8e21a2acf57632da1156613d3ce424cc06446f4bd47349c7919367e1ff0f" already present on machine | |
openshift-marketplace |
kubelet |
redhat-operators-m2vwm |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac3e8e21a2acf57632da1156613d3ce424cc06446f4bd47349c7919367e1ff0f" already present on machine | |
openshift-marketplace |
kubelet |
certified-operators-629l7 |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5" in 1.164s (1.164s including waiting). Image size: 911296197 bytes. | |
openshift-marketplace |
kubelet |
redhat-marketplace-2p79c |
Created |
Created container: extract-utilities | |
openshift-marketplace |
multus |
redhat-operators-m2vwm |
AddedInterface |
Add eth0 [10.129.0.46/23] from ovn-kubernetes | |
openshift-marketplace |
kubelet |
redhat-operators-m2vwm |
Started |
Started container extract-utilities | |
openshift-marketplace |
kubelet |
redhat-operators-m2vwm |
Created |
Created container: extract-utilities | |
openshift-marketplace |
multus |
redhat-marketplace-2p79c |
AddedInterface |
Add eth0 [10.129.0.45/23] from ovn-kubernetes | |
openshift-marketplace |
kubelet |
certified-operators-629l7 |
Started |
Started container registry-server | |
openshift-marketplace |
kubelet |
redhat-operators-m2vwm |
Pulling |
Pulling image "registry.redhat.io/redhat/redhat-operator-index:v4.18" | |
openshift-marketplace |
kubelet |
redhat-marketplace-2p79c |
Pulling |
Pulling image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" | |
openshift-marketplace |
kubelet |
certified-operators-629l7 |
Created |
Created container: registry-server | |
(x5) | openshift-apiserver |
kubelet |
apiserver-6576f6bc9d-xfzjr |
Unhealthy |
Readiness probe failed: HTTP probe failed with statuscode: 500 |
openshift-marketplace |
kubelet |
redhat-marketplace-2p79c |
Pulled |
Successfully pulled image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" in 597ms (597ms including waiting). Image size: 1057212814 bytes. | |
openshift-marketplace |
kubelet |
redhat-marketplace-2p79c |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5" | |
openshift-marketplace |
kubelet |
redhat-operators-m2vwm |
Created |
Created container: extract-content | |
openshift-marketplace |
kubelet |
redhat-marketplace-2p79c |
Created |
Created container: extract-content | |
openshift-marketplace |
kubelet |
redhat-operators-m2vwm |
Started |
Started container extract-content | |
openshift-marketplace |
kubelet |
redhat-marketplace-2p79c |
Started |
Started container extract-content | |
openshift-marketplace |
kubelet |
redhat-operators-m2vwm |
Pulled |
Successfully pulled image "registry.redhat.io/redhat/redhat-operator-index:v4.18" in 889ms (889ms including waiting). Image size: 1629241735 bytes. | |
openshift-marketplace |
kubelet |
redhat-marketplace-2p79c |
Created |
Created container: registry-server | |
openshift-marketplace |
kubelet |
redhat-marketplace-2p79c |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5" in 505ms (505ms including waiting). Image size: 911296197 bytes. | |
openshift-marketplace |
kubelet |
redhat-marketplace-2p79c |
Started |
Started container registry-server | |
openshift-marketplace |
kubelet |
redhat-operators-m2vwm |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5" | |
openshift-marketplace |
kubelet |
redhat-operators-m2vwm |
Started |
Started container registry-server | |
openshift-marketplace |
kubelet |
redhat-operators-m2vwm |
Created |
Created container: registry-server | |
openshift-marketplace |
kubelet |
redhat-operators-m2vwm |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5" in 550ms (550ms including waiting). Image size: 911296197 bytes. | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager |
openshift-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-controller-manager changed: Progressing changed from True to False ("All is well"),Available changed from False to True ("All is well"),status.versions changed from [] to [{"operator" "4.18.25"}] | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager |
openshift-controller-manager-operator |
OperatorVersionChanged |
clusteroperator/openshift-controller-manager version "operator" changed from "" to "4.18.25" | |
(x6) | openshift-apiserver |
kubelet |
apiserver-6576f6bc9d-xfzjr |
ProbeError |
Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd excluded: ok [+]etcd-readiness excluded: ok [+]poststarthook/start-apiserver-admission-initializer ok [+]informer-sync ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/max-in-flight-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/image.openshift.io-apiserver-caches ok [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok [+]poststarthook/project.openshift.io-projectcache ok [+]poststarthook/project.openshift.io-projectauthorizationcache ok [+]poststarthook/openshift.io-startinformers ok [+]poststarthook/openshift.io-restmapperupdater ok [+]poststarthook/quota.openshift.io-clusterquotamapping ok [-]shutdown failed: reason withheld readyz check failed |
openshift-marketplace |
kubelet |
redhat-operators-m2vwm |
Unhealthy |
Startup probe failed: timeout: failed to connect service ":50051" within 1s | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-installer-controller |
etcd-operator |
NodeCurrentRevisionChanged |
Updated node "master-2" from revision 0 to 3 because static pod is ready | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller |
etcd-operator |
ConfigMapUpdated |
Updated ConfigMap/etcd-endpoints -n openshift-etcd: cause by changes in data.6af36d024ef3f7a3 | |
openshift-oauth-apiserver |
replicaset-controller |
apiserver-96c4c446c |
SuccessfulDelete |
Deleted pod: apiserver-96c4c446c-brl6n | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-config-observer-configobserver |
kube-apiserver-operator |
ObservedConfigChanged |
Writing updated observed config: map[string]any{ "admission": map[string]any{"pluginConfig": map[string]any{"PodSecurity": map[string]any{"configuration": map[string]any{"defaults": map[string]any{"audit": string("restricted"), "audit-version": string("latest"), "enforce": string("privileged"), "enforce-version": string("latest"), ...}}}, "network.openshift.io/ExternalIPRanger": map[string]any{"configuration": map[string]any{"allowIngressIP": bool(false), "apiVersion": string("network.openshift.io/v1"), "kind": string("ExternalIPRangerAdmissionConfig")}}, "network.openshift.io/RestrictedEndpointsAdmission": map[string]any{"configuration": map[string]any{"apiVersion": string("network.openshift.io/v1"), "kind": string("RestrictedEndpointsAdmissionConfig"), "restrictedCIDRs": []any{string("10.128.0.0/14"), string("172.30.0.0/16")}}}}}, "apiServerArguments": map[string]any{ "api-audiences": []any{string("https://kubernetes.default.svc")}, "authentication-token-webhook-config-file": []any{string("/etc/kubernetes/static-pod-resources/secrets/webhook-authenticat"...)}, "authentication-token-webhook-version": []any{string("v1")}, "etcd-servers": []any{ string("https://192.168.34.10:2379"), string("https://192.168.34.11:2379"), + string("https://192.168.34.12:2379"), string("https://localhost:2379"), }, "feature-gates": []any{string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), ...}, "goaway-chance": []any{string("0.001")}, ... 
// 4 identical entries }, "corsAllowedOrigins": []any{string(`//127\.0\.0\.1(:|$)`), string("//localhost(:|$)")}, "servicesSubnet": string("172.30.0.0/16"), "servingInfo": map[string]any{"bindAddress": string("0.0.0.0:6443"), "bindNetwork": string("tcp4"), "cipherSuites": []any{string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), string("TLS_CHACHA20_POLY1305_SHA256"), string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), ...}, "minTLSVersion": string("VersionTLS12"), ...}, } | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Progressing changed from False to True ("APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 2, desired generation is 3.") | |
openshift-authentication-operator |
cluster-authentication-operator-config-observer-configobserver |
authentication-operator |
ObservedConfigChanged |
Writing updated section ("oauthAPIServer") of observed config: "\u00a0\u00a0map[string]any{\n\u00a0\u00a0\t\"apiServerArguments\": map[string]any{\n\u00a0\u00a0\t\t\"api-audiences\": []any{string(\"https://kubernetes.default.svc\")},\n\u00a0\u00a0\t\t\"cors-allowed-origins\": []any{string(`//127\\.0\\.0\\.1(:|$)`), string(\"//localhost(:|$)\")},\n\u00a0\u00a0\t\t\"etcd-servers\": []any{\n\u00a0\u00a0\t\t\tstring(\"https://192.168.34.10:2379\"),\n\u00a0\u00a0\t\t\tstring(\"https://192.168.34.11:2379\"),\n+\u00a0\t\t\tstring(\"https://192.168.34.12:2379\"),\n\u00a0\u00a0\t\t},\n\u00a0\u00a0\t\t\"tls-cipher-suites\": []any{string(\"TLS_AES_128_GCM_SHA256\"), string(\"TLS_AES_256_GCM_SHA384\"), string(\"TLS_CHACHA20_POLY1305_SHA256\"), string(\"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256\"), ...},\n\u00a0\u00a0\t\t\"tls-min-version\": string(\"VersionTLS12\"),\n\u00a0\u00a0\t},\n\u00a0\u00a0}\n" | |
openshift-oauth-apiserver |
deployment-controller |
apiserver |
ScalingReplicaSet |
Scaled down replica set apiserver-96c4c446c to 1 from 2 | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
StartingNewRevision |
new revision 2 triggered by "required configmap/config has changed" | |
openshift-oauth-apiserver |
kubelet |
apiserver-96c4c446c-brl6n |
Killing |
Stopping container oauth-apiserver | |
openshift-oauth-apiserver |
replicaset-controller |
apiserver-7b6784d654 |
SuccessfulCreate |
Created pod: apiserver-7b6784d654-s9576 | |
openshift-oauth-apiserver |
deployment-controller |
apiserver |
ScalingReplicaSet |
Scaled up replica set apiserver-7b6784d654 to 1 from 0 | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-pod-2 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/config-2 -n openshift-kube-apiserver because it was missing | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 2, desired generation is 3." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 1/2 pods have been updated to the latest generation and 1/2 pods are available" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-2 -n openshift-kube-apiserver because it was missing | |
| openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/bound-sa-token-signing-certs-2 -n openshift-kube-apiserver because it was missing |
| openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| openshift-etcd | multus | installer-3-master-1 | AddedInterface | Add eth0 [10.128.0.71/23] from ovn-kubernetes |
| openshift-etcd | kubelet | installer-3-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0117f94d9f2894980a318780f3c0ab2efba02e72bc7ccb267bd44c4900eb0174" already present on machine |
| openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/etcd-serving-ca-2 -n openshift-kube-apiserver because it was missing |
| openshift-etcd | kubelet | installer-3-master-1 | Started | Started container installer |
| openshift-etcd | kubelet | installer-3-master-1 | Created | Created container: installer |
| openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-server-ca-2 -n openshift-kube-apiserver because it was missing |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GuardControllerDegraded: Missing operand on node master-2" to "GuardControllerDegraded: Missing PodIP in operand kube-controller-manager-master-2 on node master-2" |
| openshift-kube-controller-manager | static-pod-installer | installer-4-master-2 | StaticPodInstallerCompleted | Successfully installed revision 4 |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GuardControllerDegraded: Missing PodIP in operand kube-controller-manager-master-2 on node master-2" to "GuardControllerDegraded: Missing PodIP in operand kube-controller-manager-master-2 on node master-2\nStaticPodsDegraded: pod/kube-controller-manager-master-2 container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-2 container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-2 container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-2 container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: " |
| openshift-kube-controller-manager | kubelet | kube-controller-manager-master-2 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b283544da0bfbf6c8c5a11e0ca9fb4daaf4ac4ec910b30c07c7bef65a98f11d" |
| openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kubelet-serving-ca-2 -n openshift-kube-apiserver because it was missing |
| openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/sa-token-signing-certs-2 -n openshift-kube-apiserver because it was missing |
(x6) | openshift-apiserver | default-scheduler | apiserver-8644c46667-7z9ft | FailedScheduling | 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. |
| openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-audit-policies-2 -n openshift-kube-apiserver because it was missing |
| openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/etcd-client-2 -n openshift-kube-apiserver because it was missing |
| openshift-etcd | kubelet | installer-3-master-1 | Killing | Stopping container installer |
| openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-serving-certkey-2 -n openshift-kube-apiserver because it was missing |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-guardcontroller | kube-controller-manager-operator | PodCreated | Created Pod/kube-controller-manager-guard-master-2 -n openshift-kube-controller-manager because it was missing |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded changed from True to False ("NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-controller-manager-master-2 container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-2 container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-2 container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-2 container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: ") |
| openshift-kube-controller-manager | multus | kube-controller-manager-guard-master-2 | AddedInterface | Add eth0 [10.129.0.47/23] from ovn-kubernetes |
(x2) | openshift-apiserver-operator | openshift-apiserver-operator-config-observer-configobserver | openshift-apiserver-operator | ObservedConfigChanged | Writing updated observed config: map[string]any{ ... // 2 identical entries "routingConfig": map[string]any{"subdomain": string("apps.ocp.openstack.lab")}, "servingInfo": map[string]any{"cipherSuites": []any{string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), string("TLS_CHACHA20_POLY1305_SHA256"), string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), ...}, "minTLSVersion": string("VersionTLS12")}, "storageConfig": map[string]any{"urls": []any{string("https://192.168.34.10:2379"), string("https://192.168.34.11:2379"), + string("https://192.168.34.12:2379")}}} |
| openshift-kube-controller-manager | kubelet | kube-controller-manager-master-2 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:19291a8938541dd95496e6f04aad7abf914ea2c8d076c1f149a12368682f85d4" |
| openshift-kube-controller-manager | kubelet | kube-controller-manager-guard-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:67a578604f1437ddb47d87e748b6772d86dd3856048cc355226789db22724b55" already present on machine |
| openshift-kube-controller-manager | kubelet | kube-controller-manager-master-2 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b283544da0bfbf6c8c5a11e0ca9fb4daaf4ac4ec910b30c07c7bef65a98f11d" in 8.626s (8.626s including waiting). Image size: 945482213 bytes. |
| openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-client-token-2 -n openshift-kube-apiserver because it was missing |
| openshift-kube-controller-manager | kubelet | kube-controller-manager-guard-master-2 | Created | Created container: guard |
| openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/webhook-authenticator-2 -n openshift-kube-apiserver because it was missing |
| openshift-kube-controller-manager | kubelet | kube-controller-manager-guard-master-2 | Started | Started container guard |
| openshift-kube-controller-manager | kubelet | kube-controller-manager-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:67a578604f1437ddb47d87e748b6772d86dd3856048cc355226789db22724b55" already present on machine |
| openshift-kube-controller-manager | kubelet | kube-controller-manager-master-2 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:19291a8938541dd95496e6f04aad7abf914ea2c8d076c1f149a12368682f85d4" in 1.904s (1.904s including waiting). Image size: 498279559 bytes. |
| openshift-kube-controller-manager | kubelet | kube-controller-manager-master-2 | Created | Created container: cluster-policy-controller |
| openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | RevisionTriggered | new revision 2 triggered by "required configmap/config has changed" |
| openshift-kube-controller-manager | kubelet | kube-controller-manager-master-2 | Started | Started container cluster-policy-controller |
| openshift-kube-controller-manager | cluster-policy-controller | kube-controller-manager-master-2 | ControlPlaneTopology | unable to get control plane topology, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope |
| openshift-kube-apiserver | static-pod-installer | installer-1-master-2 | StaticPodInstallerCompleted | Successfully installed revision 1 |
| openshift-kube-controller-manager | kubelet | kube-controller-manager-master-2 | Created | Created container: kube-controller-manager-recovery-controller |
| openshift-apiserver | replicaset-controller | apiserver-595d5f74d8 | SuccessfulCreate | Created pod: apiserver-595d5f74d8-hck8v |
| openshift-kube-controller-manager | kubelet | kube-controller-manager-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:67a578604f1437ddb47d87e748b6772d86dd3856048cc355226789db22724b55" already present on machine |
| openshift-kube-controller-manager | kubelet | kube-controller-manager-master-2 | Started | Started container kube-controller-manager-cert-syncer |
| openshift-kube-controller-manager | kubelet | kube-controller-manager-master-2 | Started | Started container kube-controller-manager-recovery-controller |
| openshift-apiserver | default-scheduler | apiserver-595d5f74d8-hck8v | Scheduled | Successfully assigned openshift-apiserver/apiserver-595d5f74d8-hck8v to master-1 |
| openshift-kube-controller-manager | kubelet | kube-controller-manager-master-2 | Created | Created container: kube-controller-manager-cert-syncer |
| openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 2/2 pods have been updated to the latest generation and 1/2 pods are available" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 2/2 pods have been updated to the latest generation and 1/2 pods are available\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 3, desired generation is 4." |
| openshift-etcd | kubelet | installer-4-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0117f94d9f2894980a318780f3c0ab2efba02e72bc7ccb267bd44c4900eb0174" already present on machine |
| openshift-etcd | multus | installer-4-master-1 | AddedInterface | Add eth0 [10.128.0.72/23] from ovn-kubernetes |
| openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 2/2 pods have been updated to the latest generation and 1/2 pods are available\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 3, desired generation is 4." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 3, desired generation is 4.\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 3, desired generation is 4." |
| openshift-apiserver | replicaset-controller | apiserver-8644c46667 | SuccessfulDelete | Deleted pod: apiserver-8644c46667-7z9ft |
| openshift-kube-apiserver | kubelet | kube-apiserver-master-2 | Started | Started container setup |
| openshift-kube-apiserver | kubelet | kube-apiserver-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b283544da0bfbf6c8c5a11e0ca9fb4daaf4ac4ec910b30c07c7bef65a98f11d" already present on machine |
| openshift-apiserver | kubelet | apiserver-595d5f74d8-hck8v | Created | Created container: fix-audit-permissions |
| openshift-apiserver | multus | apiserver-595d5f74d8-hck8v | AddedInterface | Add eth0 [10.128.0.73/23] from ovn-kubernetes |
| openshift-etcd | kubelet | installer-4-master-1 | Created | Created container: installer |
| openshift-apiserver | kubelet | apiserver-595d5f74d8-hck8v | Started | Started container fix-audit-permissions |
| openshift-etcd | kubelet | installer-4-master-1 | Started | Started container installer |
| openshift-kube-apiserver | kubelet | kube-apiserver-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b283544da0bfbf6c8c5a11e0ca9fb4daaf4ac4ec910b30c07c7bef65a98f11d" already present on machine |
| openshift-kube-apiserver | kubelet | kube-apiserver-master-2 | Created | Created container: setup |
| openshift-apiserver | kubelet | apiserver-595d5f74d8-hck8v | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:db6d4edac103c373eb6bee221074d39e3707377b4d26444e98afb1a1363b3cb7" already present on machine |
| openshift-apiserver | kubelet | apiserver-595d5f74d8-hck8v | Started | Started container openshift-apiserver |
| openshift-kube-apiserver | kubelet | kube-apiserver-master-2 | Created | Created container: kube-apiserver-cert-regeneration-controller |
| openshift-kube-apiserver | kubelet | kube-apiserver-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine |
| openshift-apiserver | kubelet | apiserver-595d5f74d8-hck8v | Created | Created container: openshift-apiserver |
| openshift-kube-apiserver | kubelet | kube-apiserver-master-2 | Created | Created container: kube-apiserver |
| openshift-kube-apiserver | kubelet | kube-apiserver-master-2 | Started | Started container kube-apiserver-cert-syncer |
| openshift-kube-apiserver | kubelet | kube-apiserver-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine |
| openshift-kube-apiserver | kubelet | kube-apiserver-master-2 | Created | Created container: kube-apiserver-cert-syncer |
| openshift-kube-apiserver | kubelet | kube-apiserver-master-2 | Started | Started container kube-apiserver |
| openshift-kube-apiserver | kubelet | kube-apiserver-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine |
| openshift-kube-apiserver | kubelet | kube-apiserver-master-2 | Started | Started container kube-apiserver-cert-regeneration-controller |
| openshift-apiserver | kubelet | apiserver-595d5f74d8-hck8v | Started | Started container openshift-apiserver-check-endpoints |
| openshift-apiserver | kubelet | apiserver-595d5f74d8-hck8v | Created | Created container: openshift-apiserver-check-endpoints |
| openshift-apiserver | kubelet | apiserver-595d5f74d8-hck8v | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine |
| openshift-apiserver | kubelet | apiserver-595d5f74d8-hck8v | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:db6d4edac103c373eb6bee221074d39e3707377b4d26444e98afb1a1363b3cb7" already present on machine |
| openshift-kube-apiserver | kubelet | kube-apiserver-master-2 | Started | Started container kube-apiserver-check-endpoints |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-guardcontroller | kube-controller-manager-operator | PodUpdated | Updated Pod/kube-controller-manager-guard-master-2 -n openshift-kube-controller-manager because it changed |
| openshift-kube-apiserver | kubelet | kube-apiserver-master-2 | Started | Started container kube-apiserver-insecure-readyz |
| openshift-kube-apiserver | kubelet | kube-apiserver-master-2 | Created | Created container: kube-apiserver-insecure-readyz |
| openshift-kube-apiserver | kubelet | kube-apiserver-master-2 | Created | Created container: kube-apiserver-check-endpoints |
| openshift-kube-apiserver | kubelet | kube-apiserver-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine |
| openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 3, desired generation is 4.\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 3, desired generation is 4." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 3, desired generation is 4." |
| openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "GuardControllerDegraded: Missing operand on node master-2" to "GuardControllerDegraded: Missing PodIP in operand kube-apiserver-master-2 on node master-2" |
| openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 3, desired generation is 4." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/2 pods have been updated to the latest generation and 1/2 pods are available" |
| openshift-kube-apiserver | apiserver | kube-apiserver-master-2 | KubeAPIReadyz | readyz=true |
| openshift-apiserver | kubelet | apiserver-8644c46667-cg62m | Killing | Stopping container openshift-apiserver-check-endpoints |
(x4) | openshift-apiserver | deployment-controller | apiserver | ScalingReplicaSet | (combined from similar events): Scaled up replica set apiserver-595d5f74d8 to 2 from 1 |
| openshift-apiserver | replicaset-controller | apiserver-8644c46667 | SuccessfulDelete | Deleted pod: apiserver-8644c46667-cg62m |
| openshift-apiserver | kubelet | apiserver-8644c46667-cg62m | Killing | Stopping container openshift-apiserver |
| openshift-apiserver | replicaset-controller | apiserver-595d5f74d8 | SuccessfulCreate | Created pod: apiserver-595d5f74d8-ttb94 |
| openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| openshift-etcd | kubelet | installer-4-master-1 | Killing | Stopping container installer |
(x2) | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/2 pods have been updated to the latest generation and 1/2 pods are available" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 2/2 pods have been updated to the latest generation and 1/2 pods are available" |
| openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 1 node is at revision 1" to "NodeInstallerProgressing: 1 node is at revision 0; 1 node is at revision 1; 0 nodes have achieved new revision 2",Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 0; 1 node is at revision 1" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 0; 1 node is at revision 1; 0 nodes have achieved new revision 2" |
| openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Progressing message changed from "NodeInstallerProgressing: 2 nodes are at revision 0; 0 nodes have achieved new revision 5" to "NodeInstallerProgressing: 1 node is at revision 0; 1 node is at revision 5",Available changed from False to True ("StaticPodsAvailable: 1 nodes are active; 1 node is at revision 0; 1 node is at revision 5") |
| openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | NodeCurrentRevisionChanged | Updated node "master-1" from revision 0 to 5 because static pod is ready |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Progressing changed from True to False ("NodeInstallerProgressing: 2 nodes are at revision 4"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 0; 1 node is at revision 4" to "StaticPodsAvailable: 2 nodes are active; 2 nodes are at revision 4" |
| openshift-etcd | kubelet | installer-5-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0117f94d9f2894980a318780f3c0ab2efba02e72bc7ccb267bd44c4900eb0174" already present on machine |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | NodeCurrentRevisionChanged | Updated node "master-2" from revision 0 to 4 because static pod is ready |
| openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | NodeTargetRevisionChanged | Updating node "master-2" from revision 0 to 5 because node master-2 static pod not found |
| openshift-etcd | multus | installer-5-master-1 | AddedInterface | Add eth0 [10.128.0.74/23] from ovn-kubernetes |
| openshift-kube-apiserver-operator | kube-apiserver-operator-guardcontroller | kube-apiserver-operator | PodCreated | Created Pod/kube-apiserver-guard-master-2 -n openshift-kube-apiserver because it was missing |
| openshift-kube-apiserver | kubelet | kube-apiserver-guard-master-2 | Started | Started container guard |
| openshift-etcd | kubelet | installer-5-master-1 | Created | Created container: installer |
| openshift-kube-apiserver | multus | kube-apiserver-guard-master-2 | AddedInterface | Add eth0 [10.129.0.48/23] from ovn-kubernetes |
| openshift-kube-apiserver | kubelet | kube-apiserver-guard-master-2 | Created | Created container: guard |
| openshift-etcd | kubelet | installer-5-master-1 | Started | Started container installer |
| openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | PodCreated | Created Pod/installer-5-master-2 -n openshift-kube-scheduler because it was missing |
| openshift-kube-apiserver | kubelet | kube-apiserver-guard-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine |
| openshift-kube-scheduler | multus | installer-5-master-2 | AddedInterface | Add eth0 [10.129.0.49/23] from ovn-kubernetes |
| openshift-kube-scheduler | kubelet | installer-5-master-2 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f81582ec6e6cc159d578a2d70ce7c8a4db8eb0172334226c9123770d7d2a1642" |
| openshift-kube-scheduler | kubelet | installer-5-master-2 | Started | Started container installer |
| openshift-kube-scheduler | kubelet | installer-5-master-2 | Created | Created container: installer |
| openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | PodCreated | Created Pod/installer-2-master-2 -n openshift-kube-apiserver because it was missing |
| openshift-kube-scheduler | kubelet | installer-5-master-2 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f81582ec6e6cc159d578a2d70ce7c8a4db8eb0172334226c9123770d7d2a1642" in 1.802s (1.802s including waiting). Image size: 499422833 bytes. |
| openshift-ingress-operator | cluster-ingress-operator | ingress-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| openshift-kube-apiserver | kubelet | installer-2-master-2 | Created | Created container: installer |
| openshift-kube-apiserver | kubelet | installer-2-master-2 | Started | Started container installer |
| openshift-kube-apiserver | kubelet | installer-2-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine |
| openshift-kube-apiserver | multus | installer-2-master-2 | AddedInterface | Add eth0 [10.129.0.50/23] from ovn-kubernetes |
| openshift-kube-apiserver-operator | kube-apiserver-operator-guardcontroller | kube-apiserver-operator | PodUpdated | Updated Pod/kube-apiserver-guard-master-2 -n openshift-kube-apiserver because it changed |
(x6) | openshift-oauth-apiserver | default-scheduler | apiserver-7b6784d654-s9576 | FailedScheduling | 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. |
(x9) | openshift-oauth-apiserver | kubelet | apiserver-96c4c446c-brl6n | Unhealthy | Readiness probe failed: HTTP probe failed with statuscode: 500 |
(x9) | openshift-oauth-apiserver | kubelet | apiserver-96c4c446c-brl6n | ProbeError | Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd excluded: ok [+]etcd-readiness excluded: ok [+]poststarthook/start-apiserver-admission-initializer ok [+]informer-sync ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/max-in-flight-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/openshift.io-StartUserInformer ok [+]poststarthook/openshift.io-StartOAuthInformer ok [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok [-]shutdown failed: reason withheld readyz check failed |
| openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| openshift-oauth-apiserver | default-scheduler | apiserver-7b6784d654-s9576 | Scheduled | Successfully assigned openshift-oauth-apiserver/apiserver-7b6784d654-s9576 to master-1 |
| openshift-oauth-apiserver | kubelet | apiserver-7b6784d654-s9576 | Created | Created container: fix-audit-permissions |
| openshift-oauth-apiserver | multus | apiserver-7b6784d654-s9576 | AddedInterface | Add eth0 [10.128.0.75/23] from ovn-kubernetes |
| openshift-oauth-apiserver | kubelet | apiserver-7b6784d654-s9576 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a10f1f5c782b4f4fb9c364625daf34791903749d4149eb87291c70598b16b404" already present on machine |
| openshift-oauth-apiserver | kubelet | apiserver-7b6784d654-s9576 | Started | Started container oauth-apiserver |
| openshift-oauth-apiserver | kubelet | apiserver-7b6784d654-s9576 | Created | Created container: oauth-apiserver |
| openshift-oauth-apiserver | kubelet | apiserver-7b6784d654-s9576 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a10f1f5c782b4f4fb9c364625daf34791903749d4149eb87291c70598b16b404" already present on machine |
| openshift-oauth-apiserver | kubelet | apiserver-7b6784d654-s9576 | Started | Started container fix-audit-permissions |
(x7) | openshift-apiserver | kubelet | apiserver-8644c46667-cg62m | Unhealthy | Readiness probe failed: HTTP probe failed with statuscode: 500 |
(x7) | openshift-apiserver | kubelet | apiserver-8644c46667-cg62m | ProbeError | Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd excluded: ok [+]etcd-readiness excluded: ok [+]poststarthook/start-apiserver-admission-initializer ok [+]informer-sync ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/max-in-flight-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/image.openshift.io-apiserver-caches ok [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok [+]poststarthook/project.openshift.io-projectcache ok [+]poststarthook/project.openshift.io-projectauthorizationcache ok [+]poststarthook/openshift.io-startinformers ok [+]poststarthook/openshift.io-restmapperupdater ok [+]poststarthook/quota.openshift.io-clusterquotamapping ok [-]shutdown failed: reason withheld readyz check failed |
| openshift-oauth-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled down replica set apiserver-96c4c446c to 0 from 1 |
openshift-oauth-apiserver |
deployment-controller |
apiserver |
ScalingReplicaSet |
Scaled up replica set apiserver-7b6784d654 to 2 from 1 | |
openshift-oauth-apiserver |
replicaset-controller |
apiserver-7b6784d654 |
SuccessfulCreate |
Created pod: apiserver-7b6784d654-l7lmp | |
openshift-oauth-apiserver |
kubelet |
apiserver-96c4c446c-728v2 |
Killing |
Stopping container oauth-apiserver | |
openshift-oauth-apiserver |
replicaset-controller |
apiserver-96c4c446c |
SuccessfulDelete |
Deleted pod: apiserver-96c4c446c-728v2 | |
(x2) | openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 1/2 pods have been updated to the latest generation and 1/2 pods are available" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 2/2 pods have been updated to the latest generation and 1/2 pods are available" |
| | openshift-kube-scheduler | static-pod-installer | installer-5-master-2 | StaticPodInstallerCompleted | Successfully installed revision 5 |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-2 | Created | Created container: wait-for-host-port |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "GuardControllerDegraded: Missing operand on node master-2" to "GuardControllerDegraded: Missing PodIP in operand openshift-kube-scheduler-master-2 on node master-2" |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-2 | Started | Started container wait-for-host-port |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b283544da0bfbf6c8c5a11e0ca9fb4daaf4ac4ec910b30c07c7bef65a98f11d" already present on machine |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b283544da0bfbf6c8c5a11e0ca9fb4daaf4ac4ec910b30c07c7bef65a98f11d" already present on machine |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f81582ec6e6cc159d578a2d70ce7c8a4db8eb0172334226c9123770d7d2a1642" already present on machine |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-2 | Started | Started container kube-scheduler-cert-syncer |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-2 | Created | Created container: kube-scheduler-cert-syncer |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f81582ec6e6cc159d578a2d70ce7c8a4db8eb0172334226c9123770d7d2a1642" already present on machine |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-2 | Started | Started container kube-scheduler |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-2 | Created | Created container: kube-scheduler |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-2 | Created | Created container: kube-scheduler-recovery-controller |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-2 | Started | Started container kube-scheduler-recovery-controller |
| (x5) | openshift-apiserver | default-scheduler | apiserver-595d5f74d8-ttb94 | FailedScheduling | 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. |
| | openshift-etcd | static-pod-installer | installer-5-master-1 | StaticPodInstallerCompleted | Successfully installed revision 5 |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-2 | Killing | Stopping container kube-apiserver-cert-syncer |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-2 | Killing | Stopping container kube-apiserver-check-endpoints |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-2 | Killing | Stopping container kube-apiserver-insecure-readyz |
| | openshift-kube-apiserver | static-pod-installer | installer-2-master-2 | StaticPodInstallerCompleted | Successfully installed revision 2 |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-2 | Killing | Stopping container kube-apiserver |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-guardcontroller | openshift-kube-scheduler-operator | PodCreated | Created Pod/openshift-kube-scheduler-guard-master-2 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-2 | Killing | Stopping container kube-apiserver-cert-regeneration-controller |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-guard-master-2 | Started | Started container guard |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-2 | TerminationPreShutdownHooksFinished | All pre-shutdown hooks have been finished |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-2 | ShutdownInitiated | Received signal to terminate, becoming unready, but keeping serving |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-guard-master-2 | Created | Created container: guard |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-guard-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f81582ec6e6cc159d578a2d70ce7c8a4db8eb0172334226c9123770d7d2a1642" already present on machine |
| | openshift-kube-scheduler | multus | openshift-kube-scheduler-guard-master-2 | AddedInterface | Add eth0 [10.129.0.51/23] from ovn-kubernetes |
| | openshift-apiserver | default-scheduler | apiserver-595d5f74d8-ttb94 | Scheduled | Successfully assigned openshift-apiserver/apiserver-595d5f74d8-ttb94 to master-2 |
| (x4) | openshift-oauth-apiserver | default-scheduler | apiserver-7b6784d654-l7lmp | FailedScheduling | 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-guardcontroller | openshift-kube-scheduler-operator | PodUpdated | Updated Pod/openshift-kube-scheduler-guard-master-2 -n openshift-kube-scheduler because it changed |
| | openshift-apiserver | kubelet | apiserver-595d5f74d8-ttb94 | Started | Started container fix-audit-permissions |
| | openshift-apiserver | kubelet | apiserver-595d5f74d8-ttb94 | Created | Created container: fix-audit-permissions |
| | openshift-apiserver | multus | apiserver-595d5f74d8-ttb94 | AddedInterface | Add eth0 [10.129.0.52/23] from ovn-kubernetes |
| | openshift-apiserver | kubelet | apiserver-595d5f74d8-ttb94 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:db6d4edac103c373eb6bee221074d39e3707377b4d26444e98afb1a1363b3cb7" already present on machine |
| | openshift-apiserver | kubelet | apiserver-595d5f74d8-ttb94 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:db6d4edac103c373eb6bee221074d39e3707377b4d26444e98afb1a1363b3cb7" already present on machine |
| | openshift-apiserver | kubelet | apiserver-595d5f74d8-ttb94 | Started | Started container openshift-apiserver-check-endpoints |
| | openshift-apiserver | kubelet | apiserver-595d5f74d8-ttb94 | Created | Created container: openshift-apiserver-check-endpoints |
| | openshift-apiserver | kubelet | apiserver-595d5f74d8-ttb94 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine |
| | openshift-apiserver | kubelet | apiserver-595d5f74d8-ttb94 | Started | Started container openshift-apiserver |
| | openshift-apiserver | kubelet | apiserver-595d5f74d8-ttb94 | Created | Created container: openshift-apiserver |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| (x9) | openshift-oauth-apiserver | kubelet | apiserver-96c4c446c-728v2 | Unhealthy | Readiness probe failed: HTTP probe failed with statuscode: 500 |
| (x9) | openshift-oauth-apiserver | kubelet | apiserver-96c4c446c-728v2 | ProbeError | Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd excluded: ok [+]etcd-readiness excluded: ok [+]poststarthook/start-apiserver-admission-initializer ok [+]informer-sync ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/max-in-flight-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/openshift.io-StartUserInformer ok [+]poststarthook/openshift.io-StartOAuthInformer ok [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok [-]shutdown failed: reason withheld readyz check failed |
| (x238) | openshift-ingress | kubelet | router-default-5ddb89f76-xf924 | ProbeError | Startup probe error: HTTP probe failed with statuscode: 500 body: [-]backend-http failed: reason withheld [-]has-synced failed: reason withheld [+]process-running ok healthz check failed |
| | openshift-oauth-apiserver | default-scheduler | apiserver-7b6784d654-l7lmp | Scheduled | Successfully assigned openshift-oauth-apiserver/apiserver-7b6784d654-l7lmp to master-2 |
| | openshift-oauth-apiserver | kubelet | apiserver-7b6784d654-l7lmp | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a10f1f5c782b4f4fb9c364625daf34791903749d4149eb87291c70598b16b404" already present on machine |
| | openshift-oauth-apiserver | multus | apiserver-7b6784d654-l7lmp | AddedInterface | Add eth0 [10.129.0.53/23] from ovn-kubernetes |
| | openshift-oauth-apiserver | kubelet | apiserver-7b6784d654-l7lmp | Started | Started container oauth-apiserver |
| | openshift-oauth-apiserver | kubelet | apiserver-7b6784d654-l7lmp | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a10f1f5c782b4f4fb9c364625daf34791903749d4149eb87291c70598b16b404" already present on machine |
| | openshift-oauth-apiserver | kubelet | apiserver-7b6784d654-l7lmp | Created | Created container: oauth-apiserver |
| | openshift-oauth-apiserver | kubelet | apiserver-7b6784d654-l7lmp | Started | Started container fix-audit-permissions |
| | openshift-oauth-apiserver | kubelet | apiserver-7b6784d654-l7lmp | Created | Created container: fix-audit-permissions |
| | openshift-etcd | kubelet | etcd-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:145b8ac6899b60bd933b5fe64e3eb49ddbc7401a13f30fda6fd207697e8c9ab8" already present on machine |
| | openshift-etcd | kubelet | etcd-master-1 | Started | Started container setup |
| | openshift-etcd | kubelet | etcd-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:145b8ac6899b60bd933b5fe64e3eb49ddbc7401a13f30fda6fd207697e8c9ab8" already present on machine |
| | openshift-etcd | kubelet | etcd-master-1 | Created | Created container: setup |
| | openshift-etcd | kubelet | etcd-master-1 | Started | Started container etcd-ensure-env-vars |
| | openshift-etcd | kubelet | etcd-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:145b8ac6899b60bd933b5fe64e3eb49ddbc7401a13f30fda6fd207697e8c9ab8" already present on machine |
| | openshift-etcd | kubelet | etcd-master-1 | Created | Created container: etcd-ensure-env-vars |
| | openshift-etcd | kubelet | etcd-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:145b8ac6899b60bd933b5fe64e3eb49ddbc7401a13f30fda6fd207697e8c9ab8" already present on machine |
| | openshift-etcd | kubelet | etcd-master-1 | Started | Started container etcd-resources-copy |
| | openshift-etcd | kubelet | etcd-master-1 | Created | Created container: etcd-resources-copy |
| | openshift-etcd | kubelet | etcd-master-1 | Created | Created container: etcdctl |
| | openshift-etcd | kubelet | etcd-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:145b8ac6899b60bd933b5fe64e3eb49ddbc7401a13f30fda6fd207697e8c9ab8" already present on machine |
| | openshift-etcd | kubelet | etcd-master-1 | Started | Started container etcd-metrics |
| | openshift-etcd | kubelet | etcd-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0117f94d9f2894980a318780f3c0ab2efba02e72bc7ccb267bd44c4900eb0174" already present on machine |
| | openshift-etcd | kubelet | etcd-master-1 | Started | Started container etcdctl |
| | openshift-etcd | kubelet | etcd-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:145b8ac6899b60bd933b5fe64e3eb49ddbc7401a13f30fda6fd207697e8c9ab8" already present on machine |
| | openshift-etcd | kubelet | etcd-master-1 | Created | Created container: etcd-metrics |
| | openshift-etcd | kubelet | etcd-master-1 | Created | Created container: etcd |
| | openshift-etcd | kubelet | etcd-master-1 | Started | Started container etcd |
| | openshift-etcd | kubelet | etcd-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0117f94d9f2894980a318780f3c0ab2efba02e72bc7ccb267bd44c4900eb0174" already present on machine |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Progressing changed from True to False ("NodeInstallerProgressing: 2 nodes are at revision 5"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 0; 1 node is at revision 5" to "StaticPodsAvailable: 2 nodes are active; 2 nodes are at revision 5" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | NodeCurrentRevisionChanged | Updated node "master-2" from revision 0 to 5 because static pod is ready |
| | openshift-etcd | kubelet | etcd-master-1 | Started | Started container etcd-rev |
| | openshift-etcd | kubelet | etcd-master-1 | Created | Created container: etcd-rev |
| | openshift-etcd | kubelet | etcd-master-1 | Started | Started container etcd-readyz |
| | openshift-etcd | kubelet | etcd-master-1 | Created | Created container: etcd-readyz |
| (x11) | openshift-kube-apiserver | kubelet | kube-apiserver-guard-master-2 | ProbeError | Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]api-openshift-apiserver-available ok [+]api-openshift-oauth-apiserver-available ok [+]informer-sync ok [+]poststarthook/openshift.io-api-request-count-filter ok [+]poststarthook/openshift.io-startkubeinformers ok [+]poststarthook/openshift.io-openshift-apiserver-reachable ok [+]poststarthook/openshift.io-oauth-apiserver-reachable ok [+]poststarthook/start-apiserver-admission-initializer ok [+]poststarthook/quota.openshift.io-clusterquotamapping ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/priority-and-fairness-config-consumer ok [+]poststarthook/priority-and-fairness-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/start-apiextensions-informers ok [+]poststarthook/start-apiextensions-controllers ok [+]poststarthook/crd-informer-synced ok [+]poststarthook/start-system-namespaces-controller ok [+]poststarthook/start-cluster-authentication-info-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok [+]poststarthook/start-legacy-token-tracking-controller ok [+]poststarthook/start-service-ip-repair-controllers ok [+]poststarthook/rbac/bootstrap-roles ok [+]poststarthook/scheduling/bootstrap-system-priority-classes ok [+]poststarthook/priority-and-fairness-config-producer ok [+]poststarthook/bootstrap-controller ok [+]poststarthook/aggregator-reload-proxy-client-cert ok [+]poststarthook/start-kube-aggregator-informers ok [+]poststarthook/apiservice-status-local-available-controller ok [+]poststarthook/apiservice-status-remote-available-controller ok [+]poststarthook/apiservice-registration-controller ok [+]poststarthook/apiservice-wait-for-first-sync ok [+]poststarthook/apiservice-discovery-controller ok [+]poststarthook/kube-apiserver-autoregistration ok [+]autoregister-completion ok [+]poststarthook/apiservice-openapi-controller ok [+]poststarthook/apiservice-openapiv3-controller ok [-]shutdown failed: reason withheld readyz check failed |
| (x2) | openshift-etcd | kubelet | etcd-guard-master-1 | ProbeError | Readiness probe error: Get "https://192.168.34.11:9980/readyz": context deadline exceeded body: |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ConfigMapUpdated | Updated ConfigMap/metrics-client-ca -n openshift-monitoring: cause by changes in data.client-ca.crt |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ValidatingWebhookConfigurationUpdated | Updated ValidatingWebhookConfiguration.admissionregistration.k8s.io/prometheusrules.openshift.io because it changed |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ValidatingWebhookConfigurationUpdated | Updated ValidatingWebhookConfiguration.admissionregistration.k8s.io/alertmanagerconfigs.openshift.io because it changed |
| | openshift-etcd | kubelet | etcd-master-1 | ProbeError | Startup probe error: Get "https://192.168.34.11:9980/readyz": net/http: request canceled (Client.Timeout exceeded while awaiting headers) body: |
| | openshift-monitoring | deployment-controller | metrics-server | ScalingReplicaSet | Scaled down replica set metrics-server-8475fbcb68 to 1 from 2 |
| | openshift-monitoring | replicaset-controller | metrics-server-8475fbcb68 | SuccessfulDelete | Deleted pod: metrics-server-8475fbcb68-8dq9n |
| | openshift-monitoring | deployment-controller | metrics-server | ScalingReplicaSet | Scaled up replica set metrics-server-76c4979bdc to 1 |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | SecretCreated | Created Secret/metrics-server-3mlibjliosje2 -n openshift-monitoring because it was missing |
| | openshift-monitoring | deployment-controller | metrics-server | ScalingReplicaSet | Scaled up replica set metrics-server-76c4979bdc to 2 from 1 |
| | openshift-monitoring | kubelet | metrics-server-8475fbcb68-8dq9n | Killing | Stopping container metrics-server |
| | openshift-monitoring | replicaset-controller | metrics-server-76c4979bdc | SuccessfulCreate | Created pod: metrics-server-76c4979bdc-gds6w |
| | openshift-monitoring | replicaset-controller | metrics-server-76c4979bdc | SuccessfulCreate | Created pod: metrics-server-76c4979bdc-mgff4 |
| (x2) | openshift-ingress | kubelet | router-default-5ddb89f76-887cs | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:776b1203d0e4c0522ff38ffceeddfbad096e187b4d4c927f3ad89bac5f40d5c8" already present on machine |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-2 | AfterShutdownDelayDuration | The minimal shutdown duration of 1m10s finished |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-2 | InFlightRequestsDrained | All non long-running request(s) in-flight have drained |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-2 | HTTPServerStoppedListening | HTTP Server has stopped listening |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-2 | TerminationGracefulTerminationFinished | All pending requests processed |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b283544da0bfbf6c8c5a11e0ca9fb4daaf4ac4ec910b30c07c7bef65a98f11d" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-2 | Created | Created container: setup |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-2 | Started | Started container setup |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b283544da0bfbf6c8c5a11e0ca9fb4daaf4ac4ec910b30c07c7bef65a98f11d" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-2 | Created | Created container: kube-apiserver |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-2 | Created | Created container: kube-apiserver-cert-regeneration-controller |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-2 | Started | Started container kube-apiserver-cert-syncer |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-2 | Started | Started container kube-apiserver |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-2 | Created | Created container: kube-apiserver-cert-syncer |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-2 | Created | Created container: kube-apiserver-insecure-readyz |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-2 | Started | Started container kube-apiserver-cert-regeneration-controller |
| | openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SuccessfulCreate | Created job collect-profiles-29340795 |
| | openshift-operator-lifecycle-manager | default-scheduler | collect-profiles-29340795-t5kx5 | Scheduled | Successfully assigned openshift-operator-lifecycle-manager/collect-profiles-29340795-t5kx5 to master-2 |
| | openshift-operator-lifecycle-manager | multus | collect-profiles-29340795-t5kx5 | AddedInterface | Add eth0 [10.129.0.54/23] from ovn-kubernetes |
| | openshift-operator-lifecycle-manager | job-controller | collect-profiles-29340795 | SuccessfulCreate | Created pod: collect-profiles-29340795-t5kx5 |
| | openshift-operator-lifecycle-manager | kubelet | collect-profiles-29340795-t5kx5 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac3e8e21a2acf57632da1156613d3ce424cc06446f4bd47349c7919367e1ff0f" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-2 | Started | Started container kube-apiserver-insecure-readyz |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-2 | Started | Started container kube-apiserver-check-endpoints |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: Missing PodIP in operand kube-apiserver-master-2 on node master-2" |
| | openshift-operator-lifecycle-manager | kubelet | collect-profiles-29340795-t5kx5 | Started | Started container collect-profiles |
| | openshift-kube-apiserver | cert-regeneration-controller | openshift-kube-apiserver | ControlPlaneTopology | unable to get control plane topology, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:serviceaccount:openshift-kube-apiserver:localhost-recovery-client" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "self-access-reviewer" not found, clusterrole.rbac.authorization.k8s.io "system:build-strategy-source" not found, clusterrole.rbac.authorization.k8s.io "system:scope-impersonation" not found, clusterrole.rbac.authorization.k8s.io "system:oauth-token-deleter" not found, clusterrole.rbac.authorization.k8s.io "system:service-account-issuer-discovery" not found, clusterrole.rbac.authorization.k8s.io "system:openshift:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "cluster-status" not found, clusterrole.rbac.authorization.k8s.io "system:openshift:scc:restricted-v2" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:build-strategy-jenkinspipeline" not found, clusterrole.rbac.authorization.k8s.io "system:webhook" not found, clusterrole.rbac.authorization.k8s.io "basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:openshift:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:build-strategy-docker" not found, clusterrole.rbac.authorization.k8s.io "cluster-admin" not found] |
| | openshift-operator-lifecycle-manager | kubelet | collect-profiles-29340795-t5kx5 | Created | Created container: collect-profiles |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-2 | Created | Created container: kube-apiserver-check-endpoints |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-2 | KubeAPIReadyz | readyz=true |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: Missing PodIP in operand kube-apiserver-master-2 on node master-2" to "NodeControllerDegraded: All master nodes are ready" |
| | openshift-operator-lifecycle-manager | job-controller | collect-profiles-29340795 | Completed | Job completed |
| | openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SawCompletedJob | Saw completed job: collect-profiles-29340795, condition: Complete |
openshift-etcd-operator |
openshift-cluster-etcd-operator-status-controller-statussyncer_etcd |
etcd-operator |
OperatorStatusChanged |
Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: 2 of 3 members are available, master-1 is unhealthy" to "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/etcd-master-1 container \"etcd\" started at 2025-10-14 13:14:23 +0000 UTC is still not ready\nEtcdMembersDegraded: 2 of 3 members are available, master-1 is unhealthy" | |
(x3) | openshift-ingress-operator |
kubelet |
ingress-operator-766ddf4575-xhdjt |
BackOff |
Back-off restarting failed container ingress-operator in pod ingress-operator-766ddf4575-xhdjt_openshift-ingress-operator(398ba6fd-0f8f-46af-b690-61a6eec9176b) |
openshift-etcd-operator |
openshift-cluster-etcd-operator-installer-controller |
etcd-operator |
NodeCurrentRevisionChanged |
Updated node "master-1" from revision 1 to 5 because static pod is ready | |
(x3) | openshift-ingress-operator |
kubelet |
ingress-operator-766ddf4575-xhdjt |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:805f1bf09553ecf2e9d735c881539c011947eee7bf4c977b074e2d0396b9d99a" already present on machine |
openshift-ingress-operator |
cluster-ingress-operator |
ingress-operator |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", 
"TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} | |
(x4) | openshift-ingress-operator | kubelet | ingress-operator-766ddf4575-xhdjt | Started | Started container ingress-operator |
(x4) | openshift-ingress-operator | kubelet | ingress-operator-766ddf4575-xhdjt | Created | Created container: ingress-operator |
| openshift-etcd | kubelet | installer-5-master-2 | Created | Created container: installer |
| openshift-etcd | kubelet | installer-5-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0117f94d9f2894980a318780f3c0ab2efba02e72bc7ccb267bd44c4900eb0174" already present on machine |
| openshift-etcd | multus | installer-5-master-2 | AddedInterface | Add eth0 [10.129.0.55/23] from ovn-kubernetes |
| openshift-etcd | kubelet | installer-5-master-2 | Started | Started container installer |
| openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 1 node is at revision 1; 0 nodes have achieved new revision 2" to "NodeInstallerProgressing: 1 node is at revision 1; 1 node is at revision 2",Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 0; 1 node is at revision 1; 0 nodes have achieved new revision 2" to "StaticPodsAvailable: 2 nodes are active; 1 node is at revision 1; 1 node is at revision 2" |
| openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | NodeCurrentRevisionChanged | Updated node "master-2" from revision 0 to 2 because static pod is ready |
| openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | NodeTargetRevisionChanged | Updating node "master-1" from revision 1 to 2 because node master-1 with revision 1 is the oldest |
| openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | PodCreated | Created Pod/installer-2-master-1 -n openshift-kube-apiserver because it was missing |
| openshift-kube-apiserver | kubelet | installer-2-master-1 | Created | Created container: installer |
| openshift-kube-apiserver | multus | installer-2-master-1 | AddedInterface | Add eth0 [10.128.0.76/23] from ovn-kubernetes |
| openshift-kube-apiserver | kubelet | installer-2-master-1 | Started | Started container installer |
| openshift-kube-apiserver | kubelet | installer-2-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine |
| openshift-etcd | static-pod-installer | installer-5-master-2 | StaticPodInstallerCompleted | Successfully installed revision 5 |
(x43) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-satokensignercontroller | kube-controller-manager-operator | SATokenSignerControllerStuck | unexpected addresses: 192.168.34.10 |
| kube-system | | | | Required control plane pods have been created |
| openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| openshift-kube-apiserver | apiserver | kube-apiserver-master-1 | ShutdownInitiated | Received signal to terminate, becoming unready, but keeping serving |
| openshift-kube-apiserver | kubelet | kube-apiserver-master-1 | Killing | Stopping container kube-apiserver-insecure-readyz |
| openshift-kube-apiserver | kubelet | kube-apiserver-master-1 | Killing | Stopping container kube-apiserver |
| openshift-kube-apiserver | kubelet | kube-apiserver-master-1 | Killing | Stopping container kube-apiserver-check-endpoints |
| openshift-kube-apiserver | apiserver | kube-apiserver-master-1 | TerminationPreShutdownHooksFinished | All pre-shutdown hooks have been finished |
| openshift-kube-apiserver | static-pod-installer | installer-2-master-1 | StaticPodInstallerCompleted | Successfully installed revision 2 |
| openshift-kube-apiserver | kubelet | kube-apiserver-master-1 | Killing | Stopping container kube-apiserver-cert-syncer |
| openshift-kube-apiserver | kubelet | kube-apiserver-master-1 | Killing | Stopping container kube-apiserver-cert-regeneration-controller |
(x5) | openshift-monitoring | default-scheduler | metrics-server-76c4979bdc-mgff4 | FailedScheduling | 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. |
(x5) | openshift-monitoring | default-scheduler | metrics-server-76c4979bdc-gds6w | FailedScheduling | 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. |
| openshift-etcd | kubelet | etcd-master-2 | Started | Started container setup |
| openshift-etcd | kubelet | etcd-master-2 | Created | Created container: setup |
| default | apiserver | openshift-kube-apiserver | TerminationPreShutdownHooksFinished | All pre-shutdown hooks have been finished |
| default | apiserver | openshift-kube-apiserver | ShutdownInitiated | Received signal to terminate, becoming unready, but keeping serving |
| openshift-cluster-version | openshift-cluster-version | version | LeaderElection | master-0_ed2630f1-04f0-4aa6-921b-5cc87053fa09 stopped leading |
| kube-system | | | | Required control plane pods have been created |
| openshift-etcd | kubelet | etcd-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:145b8ac6899b60bd933b5fe64e3eb49ddbc7401a13f30fda6fd207697e8c9ab8" already present on machine |
| openshift-etcd | kubelet | etcd-master-2 | Started | Started container etcd-ensure-env-vars |
| openshift-etcd | kubelet | etcd-master-2 | Created | Created container: etcd-ensure-env-vars |
| openshift-etcd | kubelet | etcd-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:145b8ac6899b60bd933b5fe64e3eb49ddbc7401a13f30fda6fd207697e8c9ab8" already present on machine |
| openshift-etcd | kubelet | etcd-master-2 | Started | Started container etcd-resources-copy |
| openshift-etcd | kubelet | etcd-master-2 | Created | Created container: etcd-resources-copy |
| openshift-etcd | kubelet | etcd-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:145b8ac6899b60bd933b5fe64e3eb49ddbc7401a13f30fda6fd207697e8c9ab8" already present on machine |
| openshift-etcd | kubelet | etcd-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:145b8ac6899b60bd933b5fe64e3eb49ddbc7401a13f30fda6fd207697e8c9ab8" already present on machine |
| openshift-etcd | kubelet | etcd-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:145b8ac6899b60bd933b5fe64e3eb49ddbc7401a13f30fda6fd207697e8c9ab8" already present on machine |
| openshift-etcd | kubelet | etcd-master-2 | Created | Created container: etcdctl |
| openshift-etcd | kubelet | etcd-master-2 | Started | Started container etcdctl |
| openshift-etcd | kubelet | etcd-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0117f94d9f2894980a318780f3c0ab2efba02e72bc7ccb267bd44c4900eb0174" already present on machine |
| openshift-etcd | kubelet | etcd-master-2 | Created | Created container: etcd |
| openshift-etcd | kubelet | etcd-master-2 | Started | Started container etcd |
| openshift-etcd | kubelet | etcd-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:145b8ac6899b60bd933b5fe64e3eb49ddbc7401a13f30fda6fd207697e8c9ab8" already present on machine |
| openshift-etcd | kubelet | etcd-master-2 | Created | Created container: etcd-metrics |
| openshift-etcd | kubelet | etcd-master-2 | Started | Started container etcd-metrics |
| openshift-etcd | kubelet | etcd-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0117f94d9f2894980a318780f3c0ab2efba02e72bc7ccb267bd44c4900eb0174" already present on machine |
| openshift-etcd | kubelet | etcd-master-2 | Created | Created container: etcd-readyz |
| openshift-etcd | kubelet | etcd-master-2 | Started | Started container etcd-readyz |
| openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.182.42:443/healthz\": dial tcp 172.30.182.42:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" to "APIServicesAvailable: \"oauth.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"user.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.182.42:443/healthz\": dial tcp 172.30.182.42:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" |
| openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | | Created <unknown>/v1.apps.openshift.io because it was missing |
| openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | | Created <unknown>/v1.quota.openshift.io because it was missing |
| openshift-authentication-operator | oauth-apiserver-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | authentication-operator | | Created <unknown>/v1.oauth.openshift.io because it was missing |
| openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | | Created <unknown>/v1.project.openshift.io because it was missing |
| openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | | Created <unknown>/v1.authorization.openshift.io because it was missing |
| openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | | Created <unknown>/v1.build.openshift.io because it was missing |
| openshift-authentication-operator | oauth-apiserver-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | authentication-operator | | Created <unknown>/v1.user.openshift.io because it was missing |
| openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | | Created <unknown>/v1.image.openshift.io because it was missing |
(x4) | openshift-authentication-operator | oauth-apiserver-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | authentication-operator | OpenShiftAPICheckFailed | "oauth.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request |
(x4) | openshift-authentication-operator | oauth-apiserver-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | authentication-operator | OpenShiftAPICheckFailed | "user.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request |
| openshift-etcd | kubelet | etcd-master-2 | Created | Created container: etcd-rev |
| openshift-etcd | kubelet | etcd-master-2 | Started | Started container etcd-rev |
| openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | | Created <unknown>/v1.route.openshift.io because it was missing |
| openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: \"oauth.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"user.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.182.42:443/healthz\": dial tcp 172.30.182.42:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" to "OAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.182.42:443/healthz\": dial tcp 172.30.182.42:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" |
| openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | | Created <unknown>/v1.security.openshift.io because it was missing |
| openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | | Created <unknown>/v1.template.openshift.io because it was missing |
| openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServicesAvailable: PreconditionNotReady" to "APIServicesAvailable: \"apps.openshift.io.v1\" is not ready: an attempt failed with statusCode = 404, err = the server could not find the requested resource\nAPIServicesAvailable: \"build.openshift.io.v1\" is not ready: an attempt failed with statusCode = 404, err = the server could not find the requested resource\nAPIServicesAvailable: \"image.openshift.io.v1\" is not ready: an attempt failed with statusCode = 404, err = the server could not find the requested resource\nAPIServicesAvailable: \"project.openshift.io.v1\" is not ready: an attempt failed with statusCode = 404, err = the server could not find the requested resource\nAPIServicesAvailable: \"route.openshift.io.v1\" is not ready: an attempt failed with statusCode = 404, err = the server could not find the requested resource\nAPIServicesAvailable: \"template.openshift.io.v1\" is not ready: an attempt failed with statusCode = 404, err = the server could not find the requested resource" |
(x4) | openshift-etcd | kubelet | etcd-guard-master-2 | ProbeError | Readiness probe error: Get "https://192.168.34.12:9980/readyz": net/http: request canceled (Client.Timeout exceeded while awaiting headers) body: |
| openshift-kube-apiserver | cert-regeneration-controller | cert-regeneration-controller-lock | LeaderElection | master-2_0a90a8c3-9cb4-49b5-9af3-a6fdceca0c27 became leader |
| openshift-cluster-version | openshift-cluster-version | version | LeaderElection | master-2_964bfb53-0387-4e66-8cc0-3de2a961fa72 became leader |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | StartingNewRevision | new revision 5 triggered by "required secret/localhost-recovery-client-token has changed" |
| openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | StartingNewRevision | new revision 3 triggered by "required secret/localhost-recovery-client-token has changed" |
| openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-pod-3 -n openshift-kube-apiserver because it was missing |
| openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/serviceaccount-ca-6 -n openshift-kube-scheduler because it was missing |
| kube-system | kube-controller-manager | kube-controller-manager | LeaderElection | master-1_b97412e9-9ab0-4326-a090-f9fc33a66468 became leader |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/cluster-policy-controller-config-5 -n openshift-kube-controller-manager because it was missing |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/config-5 -n openshift-kube-controller-manager because it was missing |
| openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | StartingNewRevision | new revision 6 triggered by "required secret/localhost-recovery-client-token has changed" |
| openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/kube-scheduler-pod-6 -n openshift-kube-scheduler because it was missing |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/kube-controller-manager-pod-5 -n openshift-kube-controller-manager because it was missing |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-satokensignercontroller | kube-controller-manager-operator | SATokenSignerControllerOK | found expected kube-apiserver endpoints |
| openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/config-6 -n openshift-kube-scheduler because it was missing |
(x7) | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "route.openshift.io.v1" failed with an attempt failed with statusCode = 404, err = the server could not find the requested resource |
(x7) | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "project.openshift.io.v1" failed with an attempt failed with statusCode = 404, err = the server could not find the requested resource |
| openshift-cluster-version | openshift-cluster-version | version | LoadPayload | Loading payload version="4.18.25" image="quay.io/openshift-release-dev/ocp-release@sha256:ba6f0f2eca65cd386a5109ddbbdb3bab9bb9801e32de56ef34f80e634a7787be" |
| openshift-cluster-version | openshift-cluster-version | version | RetrievePayload | Retrieving and verifying payload version="4.18.25" image="quay.io/openshift-release-dev/ocp-release@sha256:ba6f0f2eca65cd386a5109ddbbdb3bab9bb9801e32de56ef34f80e634a7787be" |
| default | node-controller | master-1 | RegisteredNode | Node master-1 event: Registered Node master-1 in Controller |
| default | node-controller | master-2 | RegisteredNode | Node master-2 event: Registered Node master-2 in Controller |
| openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/config-3 -n openshift-kube-apiserver because it was missing |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/controller-manager-kubeconfig-5 -n openshift-kube-controller-manager because it was missing |
| openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/scheduler-kubeconfig-6 -n openshift-kube-scheduler because it was missing |
(x7) | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "template.openshift.io.v1" failed with an attempt failed with statusCode = 404, err = the server could not find the requested resource |
| openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-6 -n openshift-kube-scheduler because it was missing |
(x7) | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "image.openshift.io.v1" failed with an attempt failed with statusCode = 404, err = the server could not find the requested resource |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-satokensignercontroller | kube-controller-manager-operator | SecretCreated | Created Secret/next-service-account-private-key -n openshift-kube-controller-manager-operator because it was missing |
| openshift-cluster-version | openshift-cluster-version | version | PayloadLoaded | Payload loaded version="4.18.25" image="quay.io/openshift-release-dev/ocp-release@sha256:ba6f0f2eca65cd386a5109ddbbdb3bab9bb9801e32de56ef34f80e634a7787be" architecture="amd64" |
| openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | SecretCreated | Created Secret/serving-cert-6 -n openshift-kube-scheduler because it was missing |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/kube-controller-cert-syncer-kubeconfig-5 -n openshift-kube-controller-manager because it was missing |
(x8) | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "build.openshift.io.v1" failed with an attempt failed with statusCode = 404, err = the server could not find the requested resource |
| openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServicesAvailable: \"apps.openshift.io.v1\" is not ready: an attempt failed with statusCode = 404, err = the server could not find the requested resource\nAPIServicesAvailable: \"build.openshift.io.v1\" is not ready: an attempt failed with statusCode = 404, err = the server could not find the requested resource\nAPIServicesAvailable: \"image.openshift.io.v1\" is not ready: an attempt failed with statusCode = 404, err = the server could not find the requested resource\nAPIServicesAvailable: \"project.openshift.io.v1\" is not ready: an attempt failed with statusCode = 404, err = the server could not find the requested resource\nAPIServicesAvailable: \"route.openshift.io.v1\" is not ready: an attempt failed with statusCode = 404, err = the server could not find the requested resource\nAPIServicesAvailable: \"template.openshift.io.v1\" is not ready: an attempt failed with statusCode = 404, err = the server could not find the requested resource" to "APIServicesAvailable: \"apps.openshift.io.v1\" is not ready: an attempt failed with statusCode = 404, err = the server could not find the requested resource\nAPIServicesAvailable: \"build.openshift.io.v1\" is not ready: an attempt failed with statusCode = 404, err = the server could not find the requested resource" |
| openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-3 -n openshift-kube-apiserver because it was missing |
(x8) | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "apps.openshift.io.v1" failed with an attempt failed with statusCode = 404, err = the server could not find the requested resource |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/serviceaccount-ca-5 -n openshift-kube-controller-manager because it was missing |
| openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/bound-sa-token-signing-certs-3 -n openshift-kube-apiserver because it was missing |
| openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | SecretCreated | Created Secret/localhost-recovery-client-token-6 -n openshift-kube-scheduler because it was missing |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-satokensignercontroller | kube-controller-manager-operator | ConfigMapUpdated | Updated ConfigMap/sa-token-signing-certs -n openshift-config-managed: cause by changes in data.service-account-002.pub |
| openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | RevisionTriggered | new revision 6 triggered by "required secret/localhost-recovery-client-token has changed" |
| openshift-etcd | kubelet | etcd-master-2 | ProbeError | Startup probe error: Get "https://192.168.34.12:9980/readyz": net/http: request canceled (Client.Timeout exceeded while awaiting headers) body: |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/service-ca-5 -n openshift-kube-controller-manager because it was missing |
(x7) | openshift-monitoring | kubelet | metrics-server-8475fbcb68-8dq9n | Unhealthy | Readiness probe failed: HTTP probe failed with statuscode: 500 |
(x7) | openshift-monitoring | kubelet | metrics-server-8475fbcb68-8dq9n | ProbeError | Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]poststarthook/max-in-flight-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]metric-storage-ready ok [+]metric-informer-sync ok [+]metadata-informer-sync ok [-]shutdown failed: reason withheld readyz check failed |
| openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | ConfigMapUpdated | Updated ConfigMap/sa-token-signing-certs -n openshift-kube-apiserver: cause by changes in data.service-account-002.pub |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/recycler-config-5 -n openshift-kube-controller-manager because it was missing |
| openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/etcd-serving-ca-3 -n openshift-kube-apiserver because it was missing |
| openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "OAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.182.42:443/healthz\": dial tcp 172.30.182.42:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.129.0.53:8443/apis/oauth.openshift.io/v1: Get \"https://10.129.0.53:8443/apis/oauth.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.182.42:443/healthz\": dial tcp 172.30.182.42:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/serving-cert-5 -n openshift-kube-controller-manager because it was missing |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/service-account-private-key-5 -n openshift-kube-controller-manager because it was missing |
| openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.129.0.53:8443/apis/oauth.openshift.io/v1: Get \"https://10.129.0.53:8443/apis/oauth.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.182.42:443/healthz\": dial tcp 172.30.182.42:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" to "OAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.182.42:443/healthz\": dial tcp 172.30.182.42:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" |
| openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-server-ca-3 -n openshift-kube-apiserver because it was missing |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/localhost-recovery-client-token-5 -n openshift-kube-controller-manager because it was missing |
| openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kubelet-serving-ca-3 -n openshift-kube-apiserver because it was missing |
| openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/sa-token-signing-certs-3 -n openshift-kube-apiserver because it was missing |
| openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | NodeTargetRevisionChanged | Updating node "master-1" from revision 5 to 6 because node master-1 with revision 5 is the oldest |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | RevisionTriggered | new revision 5 triggered by "required secret/localhost-recovery-client-token has changed" |
| openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Progressing changed from False to True ("NodeInstallerProgressing: 2 nodes are at revision 5; 0 nodes have achieved new revision 6"),Available message changed from "StaticPodsAvailable: 2 nodes are active; 2 nodes are at revision 5" to "StaticPodsAvailable: 2 nodes are active; 2 nodes are at revision 5; 0 nodes have achieved new revision 6" |
| openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | PodCreated | Created Pod/installer-6-master-1 -n openshift-kube-scheduler because it was missing |
| openshift-kube-scheduler | kubelet | installer-6-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f81582ec6e6cc159d578a2d70ce7c8a4db8eb0172334226c9123770d7d2a1642" already present on machine |
| openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-audit-policies-3 -n openshift-kube-apiserver because it was missing |
| openshift-kube-scheduler | multus | installer-6-master-1 | AddedInterface | Add eth0 [10.128.0.77/23] from ovn-kubernetes |
| openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Available changed from False to True ("All is well") |
| openshift-kube-scheduler | kubelet | installer-6-master-1 | Created | Created container: installer |
| openshift-kube-scheduler | kubelet | installer-6-master-1 | Started | Started container installer |
| openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/etcd-client-3 -n openshift-kube-apiserver because it was missing |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | NodeTargetRevisionChanged | Updating node "master-1" from revision 4 to 5 because node master-1 with revision 4 is the oldest |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Progressing changed from False to True ("NodeInstallerProgressing: 2 nodes are at revision 4; 0 nodes have achieved new revision 5"),Available message changed from "StaticPodsAvailable: 2 nodes are active; 2 nodes are at revision 4" to "StaticPodsAvailable: 2 nodes are active; 2 nodes are at revision 4; 0 nodes have achieved new revision 5" |
| openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-serving-certkey-3 -n openshift-kube-apiserver because it was missing |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-installer-controller |
kube-controller-manager-operator |
PodCreated |
Created Pod/installer-5-master-1 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-controller-manager |
kubelet |
installer-5-master-1 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:67a578604f1437ddb47d87e748b6772d86dd3856048cc355226789db22724b55" already present on machine | |
openshift-kube-controller-manager |
kubelet |
installer-5-master-1 |
Created |
Created container: installer | |
openshift-kube-controller-manager |
kubelet |
installer-5-master-1 |
Started |
Started container installer | |
openshift-kube-controller-manager |
multus |
installer-5-master-1 |
AddedInterface |
Add eth0 [10.128.0.78/23] from ovn-kubernetes | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
SecretCreated |
Created Secret/localhost-recovery-client-token-3 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
SecretCreated |
Created Secret/webhook-authenticator-3 -n openshift-kube-apiserver because it was missing | |
(x12) | openshift-kube-apiserver |
kubelet |
kube-apiserver-guard-master-1 |
ProbeError |
Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]api-openshift-apiserver-available ok [+]api-openshift-oauth-apiserver-available ok [+]informer-sync ok [+]poststarthook/openshift.io-startkubeinformers ok [+]poststarthook/openshift.io-openshift-apiserver-reachable ok [+]poststarthook/openshift.io-oauth-apiserver-reachable ok [+]poststarthook/start-apiserver-admission-initializer ok [+]poststarthook/quota.openshift.io-clusterquotamapping ok [+]poststarthook/openshift.io-api-request-count-filter ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/priority-and-fairness-config-consumer ok [+]poststarthook/priority-and-fairness-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/start-apiextensions-informers ok [+]poststarthook/start-apiextensions-controllers ok [+]poststarthook/crd-informer-synced ok [+]poststarthook/start-system-namespaces-controller ok [+]poststarthook/start-cluster-authentication-info-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok [+]poststarthook/start-legacy-token-tracking-controller ok [+]poststarthook/start-service-ip-repair-controllers ok [+]poststarthook/rbac/bootstrap-roles ok [+]poststarthook/scheduling/bootstrap-system-priority-classes ok [+]poststarthook/priority-and-fairness-config-producer ok [+]poststarthook/bootstrap-controller ok [+]poststarthook/aggregator-reload-proxy-client-cert ok [+]poststarthook/start-kube-aggregator-informers ok [+]poststarthook/apiservice-status-local-available-controller ok [+]poststarthook/apiservice-status-remote-available-controller ok [+]poststarthook/apiservice-registration-controller ok [+]poststarthook/apiservice-wait-for-first-sync ok [+]poststarthook/apiservice-discovery-controller ok [+]poststarthook/kube-apiserver-autoregistration ok [+]autoregister-completion ok [+]poststarthook/apiservice-openapi-controller ok [+]poststarthook/apiservice-openapiv3-controller ok [-]shutdown failed: reason withheld readyz check failed |
(x2) | openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master-1 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b283544da0bfbf6c8c5a11e0ca9fb4daaf4ac4ec910b30c07c7bef65a98f11d" already present on machine |
(x2) | openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master-1 |
Created |
Created container: kube-controller-manager |
(x2) | openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master-1 |
Started |
Started container kube-controller-manager |
kube-system |
kube-controller-manager |
kube-controller-manager |
LeaderElection |
master-2_de99fd68-0a87-4ff7-ad0c-48f11b58e0a0 became leader | |
default |
node-controller |
master-2 |
RegisteredNode |
Node master-2 event: Registered Node master-2 in Controller | |
default |
node-controller |
master-1 |
RegisteredNode |
Node master-1 event: Registered Node master-1 in Controller | |
openshift-kube-scheduler |
static-pod-installer |
installer-6-master-1 |
StaticPodInstallerCompleted |
Successfully installed revision 6 | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master-1 |
Killing |
Stopping container kube-scheduler | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master-1 |
Killing |
Stopping container kube-scheduler-recovery-controller | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master-1 |
Killing |
Stopping container kube-scheduler-cert-syncer | |
openshift-kube-apiserver |
apiserver |
kube-apiserver-master-1 |
InFlightRequestsDrained |
All non long-running request(s) in-flight have drained | |
openshift-kube-apiserver |
apiserver |
kube-apiserver-master-1 |
AfterShutdownDelayDuration |
The minimal shutdown duration of 1m10s finished | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
RevisionTriggered |
new revision 3 triggered by "required secret/localhost-recovery-client-token has changed,required configmap/sa-token-signing-certs has changed" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
StartingNewRevision |
new revision 3 triggered by "required secret/localhost-recovery-client-token has changed,required configmap/sa-token-signing-certs has changed" | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master-1 |
Killing |
Stopping container kube-controller-manager | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master-1 |
Killing |
Stopping container kube-controller-manager-cert-syncer | |
openshift-kube-controller-manager |
static-pod-installer |
installer-5-master-1 |
StaticPodInstallerCompleted |
Successfully installed revision 5 | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master-1 |
Killing |
Stopping container cluster-policy-controller | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master-1 |
Killing |
Stopping container kube-controller-manager-recovery-controller | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.182.42:443/healthz\": dial tcp 172.30.182.42:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: \nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.182.42:443/healthz\": dial tcp 172.30.182.42:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerWorkloadDegraded: " | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Available message changed from "OAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.182.42:443/healthz\": dial tcp 172.30.182.42:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: route.route.openshift.io \"oauth-openshift\" not found" to "OAuthServerRouteEndpointAccessibleControllerAvailable: failed to retrieve route from cache: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.182.42:443/healthz\": dial tcp 172.30.182.42:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: route.route.openshift.io \"oauth-openshift\" not found" | |
openshift-kube-apiserver |
apiserver |
kube-apiserver-master-1 |
HTTPServerStoppedListening |
HTTP Server has stopped listening | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: \nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.182.42:443/healthz\": dial tcp 172.30.182.42:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerWorkloadDegraded: " to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: \nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.182.42:443/healthz\": dial tcp 172.30.182.42:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerWorkloadDegraded: " | |
openshift-authentication-operator |
cluster-authentication-operator-payload-config-controller-openshift-authentication-payloadconfig |
authentication-operator |
SecretCreated |
Created Secret/v4-0-config-system-session -n openshift-authentication because it was missing | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: \nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.182.42:443/healthz\": dial tcp 172.30.182.42:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerWorkloadDegraded: " to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: \nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.182.42:443/healthz\": dial tcp 172.30.182.42:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found" | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Progressing changed from False to True (""),Available message changed from "OAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.182.42:443/healthz\": dial tcp 172.30.182.42:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" to "OAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.182.42:443/healthz\": dial tcp 172.30.182.42:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: route.route.openshift.io \"oauth-openshift\" not found" | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Available message changed from "OAuthServerRouteEndpointAccessibleControllerAvailable: failed to retrieve route from cache: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.182.42:443/healthz\": dial tcp 172.30.182.42:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: route.route.openshift.io \"oauth-openshift\" not found" to "OAuthServerRouteEndpointAccessibleControllerAvailable: failed to retrieve route from cache: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.182.42:443/healthz\": dial tcp 172.30.182.42:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found\nOAuthConfigRouteDegraded: The OAuth server route 'openshift-authentication/oauth-openshift' was not found\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: \nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.182.42:443/healthz\": dial tcp 172.30.182.42:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthConfigRouteDegraded: The OAuth server route 'openshift-authentication/oauth-openshift' was not found\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: \nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.182.42:443/healthz\": dial tcp 172.30.182.42:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found" | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: \nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.182.42:443/healthz\": dial tcp 172.30.182.42:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found\nOAuthConfigRouteDegraded: The OAuth server route 'openshift-authentication/oauth-openshift' was not found\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: \nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.182.42:443/healthz\": dial tcp 172.30.182.42:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found" | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthConfigRouteDegraded: The OAuth server route 'openshift-authentication/oauth-openshift' was not found\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: \nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.182.42:443/healthz\": dial tcp 172.30.182.42:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthConfigRouteDegraded: The OAuth server route 'openshift-authentication/oauth-openshift' was not found\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: \nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.182.42:443/healthz\": dial tcp 172.30.182.42:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Available message changed from "OAuthServerRouteEndpointAccessibleControllerAvailable: failed to retrieve route from cache: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.182.42:443/healthz\": dial tcp 172.30.182.42:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.ocp.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.182.42:443/healthz\": dial tcp 172.30.182.42:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthConfigRouteDegraded: The OAuth server route 'openshift-authentication/oauth-openshift' was not found\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: \nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.182.42:443/healthz\": dial tcp 172.30.182.42:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthConfigRouteDegraded: The OAuth server route 'openshift-authentication/oauth-openshift' was not found\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.ocp.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.182.42:443/healthz\": dial tcp 172.30.182.42:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerWorkloadDegraded: 
\nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" | |
openshift-kube-apiserver |
apiserver |
kube-apiserver-master-1 |
TerminationGracefulTerminationFinished |
All pending requests processed | |
openshift-authentication-operator |
cluster-authentication-operator-oauthserver-workloadworkloadcontroller |
authentication-operator |
DeploymentCreated |
Created Deployment.apps/oauth-openshift -n openshift-authentication because it was missing | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthConfigRouteDegraded: The OAuth server route 'openshift-authentication/oauth-openshift' was not found\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.ocp.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.182.42:443/healthz\": dial tcp 172.30.182.42:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthConfigRouteDegraded: The OAuth server route 'openshift-authentication/oauth-openshift' was not found\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.ocp.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.182.42:443/healthz\": dial tcp 172.30.182.42:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to 
create this)",Progressing message changed from "" to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 0, desired generation is 1.",Available message changed from "OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.ocp.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.182.42:443/healthz\": dial tcp 172.30.182.42:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.ocp.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.182.42:443/healthz\": dial tcp 172.30.182.42:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" | |
openshift-authentication-operator |
cluster-authentication-operator-metadata-controller-openshift-authentication-metadata |
authentication-operator |
ConfigMapCreated |
Created ConfigMap/v4-0-config-system-metadata -n openshift-authentication because it was missing | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthConfigRouteDegraded: The OAuth server route 'openshift-authentication/oauth-openshift' was not found\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.ocp.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.182.42:443/healthz\": dial tcp 172.30.182.42:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthConfigRouteDegraded: The OAuth server route 'openshift-authentication/oauth-openshift' was not found\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.ocp.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.182.42:443/healthz\": dial tcp 172.30.182.42:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" | |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged |
Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 0, desired generation is 1." to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 0, desired generation is 1.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.34.12:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)",Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.ocp.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.182.42:443/healthz\": dial tcp 172.30.182.42:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.ocp.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.182.42:443/healthz\": dial tcp 172.30.182.42:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known 
endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.34.12:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" | |
| | openshift-authentication-operator | cluster-authentication-operator-resource-sync-controller-resourcesynccontroller | authentication-operator | ConfigMapCreated | Created ConfigMap/oauth-openshift -n openshift-config-managed because it was missing |
| (x2) | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-2 | Started | Started container kube-controller-manager |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b283544da0bfbf6c8c5a11e0ca9fb4daaf4ac4ec910b30c07c7bef65a98f11d" already present on machine |
| (x2) | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-2 | Created | Created container: kube-controller-manager |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthConfigRouteDegraded: The OAuth server route 'openshift-authentication/oauth-openshift' was not found\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.ocp.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.182.42:443/healthz\": dial tcp 172.30.182.42:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.ocp.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.182.42:443/healthz\": dial tcp 172.30.182.42:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" | |
| | openshift-authentication-operator | cluster-authentication-operator-payload-config-controller-openshift-authentication-payloadconfig | authentication-operator | ConfigMapCreated | Created ConfigMap/v4-0-config-system-cliconfig -n openshift-authentication because it was missing |
| (x2) | openshift-network-node-identity | kubelet | network-node-identity-5tzml | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b05c14f2032f7ba3017e9bcb6b3be4e7eaed8223e30a721b46b24f9cdcbd6a95" already present on machine |
| (x2) | openshift-network-node-identity | kubelet | network-node-identity-5tzml | Created | Created container: approver |
| (x2) | openshift-network-node-identity | kubelet | network-node-identity-5tzml | Started | Started container approver |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-1 | Started | Started container cluster-policy-controller |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-1 | Started | Started container kube-controller-manager-cert-syncer |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:67a578604f1437ddb47d87e748b6772d86dd3856048cc355226789db22724b55" already present on machine |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:67a578604f1437ddb47d87e748b6772d86dd3856048cc355226789db22724b55" already present on machine |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b283544da0bfbf6c8c5a11e0ca9fb4daaf4ac4ec910b30c07c7bef65a98f11d" already present on machine |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-1 | Created | Created container: kube-controller-manager-cert-syncer |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:19291a8938541dd95496e6f04aad7abf914ea2c8d076c1f149a12368682f85d4" already present on machine |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-1 | Started | Started container kube-controller-manager |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-1 | Created | Created container: cluster-policy-controller |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-1 | Created | Created container: kube-controller-manager |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-1 | Created | Created container: kube-controller-manager-recovery-controller |
| | openshift-kube-controller-manager | cert-recovery-controller | openshift-kube-controller-manager | ControlPlaneTopology | unable to get control plane topology, using HA cluster values for leader election: Get "https://localhost:6443/apis/config.openshift.io/v1/infrastructures/cluster": dial tcp [::1]:6443: connect: connection refused |
| | openshift-kube-controller-manager | cluster-policy-controller | cluster-policy-controller-lock | LeaderElection | master-1_8f8f69ef-0dac-4fb0-91e5-cbbe07a83521 became leader |
| | openshift-kube-controller-manager | cluster-policy-controller | kube-controller-manager-master-1 | ControlPlaneTopology | unable to get control plane topology, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-1 | Started | Started container kube-controller-manager-recovery-controller |
| | kube-system | kube-controller-manager | kube-controller-manager | LeaderElection | master-2_187e7185-bd05-4401-b6f4-ce5fbe2cba8b became leader |
| | openshift-machine-config-operator | machineconfigoperator | machine-config | OperatorDegraded: MachineOSBuilderFailed | Failed to resync 4.18.25 because: failed to apply machine os builder manifests: the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io machine-os-builder) |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b283544da0bfbf6c8c5a11e0ca9fb4daaf4ac4ec910b30c07c7bef65a98f11d" already present on machine |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-1 | Started | Started container wait-for-host-port |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-1 | Created | Created container: wait-for-host-port |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| (x10) | openshift-operator-lifecycle-manager | operator-lifecycle-manager | packageserver | InstallSucceeded | install strategy completed with no errors |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b283544da0bfbf6c8c5a11e0ca9fb4daaf4ac4ec910b30c07c7bef65a98f11d" already present on machine |
| | openshift-authentication | deployment-controller | oauth-openshift | ScalingReplicaSet | Scaled up replica set oauth-openshift-55df5b4c9d to 1 from 0 |
| | openshift-authentication | replicaset-controller | oauth-openshift-6ddc4f49f9 | SuccessfulCreate | Created pod: oauth-openshift-6ddc4f49f9-9rn2t |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-1 | Started | Started container kube-scheduler-recovery-controller |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-1 | Created | Created container: kube-scheduler-recovery-controller |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f81582ec6e6cc159d578a2d70ce7c8a4db8eb0172334226c9123770d7d2a1642" already present on machine |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-1 | Started | Started container kube-scheduler-cert-syncer |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-1 | Created | Created container: kube-scheduler-cert-syncer |
| | openshift-authentication | replicaset-controller | oauth-openshift-6ddc4f49f9 | SuccessfulCreate | Created pod: oauth-openshift-6ddc4f49f9-thnnf |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f81582ec6e6cc159d578a2d70ce7c8a4db8eb0172334226c9123770d7d2a1642" already present on machine |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-1 | Started | Started container kube-scheduler |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-1 | Created | Created container: kube-scheduler |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b283544da0bfbf6c8c5a11e0ca9fb4daaf4ac4ec910b30c07c7bef65a98f11d" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-1 | Started | Started container setup |
| | openshift-authentication | replicaset-controller | oauth-openshift-6ddc4f49f9 | SuccessfulDelete | Deleted pod: oauth-openshift-6ddc4f49f9-9rn2t |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged |
Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 0, desired generation is 1.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.34.12:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 1, desired generation is 2.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.34.12:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" | |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-1 | Created | Created container: setup |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b283544da0bfbf6c8c5a11e0ca9fb4daaf4ac4ec910b30c07c7bef65a98f11d" already present on machine |
| | openshift-authentication | replicaset-controller | oauth-openshift-55df5b4c9d | SuccessfulCreate | Created pod: oauth-openshift-55df5b4c9d-k6sz4 |
| | default | node-controller | master-2 | RegisteredNode | Node master-2 event: Registered Node master-2 in Controller |
| | openshift-authentication | deployment-controller | oauth-openshift | ScalingReplicaSet | Scaled up replica set oauth-openshift-6ddc4f49f9 to 2 |
| | openshift-authentication | deployment-controller | oauth-openshift | ScalingReplicaSet | Scaled down replica set oauth-openshift-6ddc4f49f9 to 1 from 2 |
| | default | node-controller | master-1 | RegisteredNode | Node master-1 event: Registered Node master-1 in Controller |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-1 | Created | Created container: kube-apiserver-cert-syncer |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-1 | Started | Started container kube-apiserver |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-1 | Created | Created container: kube-apiserver |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-1 | Started | Started container kube-apiserver-cert-syncer |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-1 | Created | Created container: kube-apiserver-cert-regeneration-controller |
| | openshift-kube-scheduler | cert-recovery-controller | openshift-kube-scheduler | ControlPlaneTopology | unable to get control plane topology, using HA cluster values for leader election: Get "https://localhost:6443/apis/config.openshift.io/v1/infrastructures/cluster": dial tcp [::1]:6443: connect: connection refused |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-1 | Started | Started container kube-apiserver-cert-regeneration-controller |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-1 | Created | Created container: kube-apiserver-insecure-readyz |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-1 | Started | Started container kube-apiserver-insecure-readyz |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-1 | Created | Created container: kube-apiserver-check-endpoints |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-1 | Started | Started container kube-apiserver-check-endpoints |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged |
Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 1, desired generation is 2.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.34.12:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 1, desired generation is 2.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.34.11:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)",Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.ocp.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.182.42:443/healthz\": dial tcp 172.30.182.42:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.34.12:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any 
node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.ocp.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.182.42:443/healthz\": dial tcp 172.30.182.42:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.34.11:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" | |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged |
Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 1, desired generation is 2.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.34.11:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/2 pods have been updated to the latest generation and 0/2 pods are available\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.34.11:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" | |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-1 | KubeAPIReadyz | readyz=true |
| | openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-master-1 | CreatedSCCRanges | created SCC ranges for openshift-console namespace |
| | openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-master-1 | CreatedSCCRanges | created SCC ranges for openshift-console-operator namespace |
| | openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-master-1 | CreatedSCCRanges | created SCC ranges for openshift-console-user-settings namespace |
| (x15) | openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObservedConfigWriteError | Failed to write observed config: Operation cannot be fulfilled on kubeapiservers.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-member-observer-controller-etcdmemberscontroller | etcd-operator | ReportEtcdMembersErrorUpdatingStatus | Etcd.operator.openshift.io "cluster" is invalid: status.nodeStatuses[1].currentRevision: Invalid value: "integer": must only increase |
| (x2) | openshift-etcd-operator | openshift-cluster-etcd-operator-member-observer-controller-etcdmemberscontroller | etcd-operator | EtcdMembersErrorUpdatingStatus | Etcd.operator.openshift.io "cluster" is invalid: status.nodeStatuses[1].currentRevision: Invalid value: "integer": must only increase |
| | openshift-console-operator | deployment-controller | console-operator | ScalingReplicaSet | Scaled up replica set console-operator-6768b5f5f9 to 1 |
| | openshift-console-operator | replicaset-controller | console-operator-6768b5f5f9 | SuccessfulCreate | Created pod: console-operator-6768b5f5f9-6l8p6 |
| | openshift-marketplace | kubelet | marketplace-operator-c4f798dd4-djh96 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c265fd635e36ef28c00f961a9969135e715f43af7f42455c9bde03a6b95ddc3e" already present on machine |
| | openshift-marketplace | kubelet | marketplace-operator-c4f798dd4-djh96 | Unhealthy | Liveness probe failed: Get "http://10.128.0.25:8080/healthz": dial tcp 10.128.0.25:8080: connect: connection refused |
| | openshift-marketplace | kubelet | marketplace-operator-c4f798dd4-djh96 | Unhealthy | Readiness probe failed: Get "http://10.128.0.25:8080/healthz": dial tcp 10.128.0.25:8080: connect: connection refused |
| | openshift-marketplace | kubelet | marketplace-operator-c4f798dd4-djh96 | ProbeError | Readiness probe error: Get "http://10.128.0.25:8080/healthz": dial tcp 10.128.0.25:8080: connect: connection refused body: |
| | openshift-marketplace | kubelet | marketplace-operator-c4f798dd4-djh96 | ProbeError | Liveness probe error: Get "http://10.128.0.25:8080/healthz": dial tcp 10.128.0.25:8080: connect: connection refused body: |
| (x2) | openshift-marketplace | kubelet | marketplace-operator-c4f798dd4-djh96 | Created | Created container: marketplace-operator |
| (x2) | openshift-marketplace | kubelet | marketplace-operator-c4f798dd4-djh96 | Started | Started container marketplace-operator |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceCreated | Created Service/monitoring-plugin -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceAccountCreated | Created ServiceAccount/monitoring-plugin -n openshift-monitoring because it was missing |
| | openshift-kube-controller-manager | cert-recovery-controller | cert-recovery-controller-lock | LeaderElection | master-1_1546cbb7-5bff-40a3-82f3-d5b9f3e4e65f became leader |
| | openshift-monitoring | replicaset-controller | monitoring-plugin-75bcf9f5fd | SuccessfulCreate | Created pod: monitoring-plugin-75bcf9f5fd-xkw2l |
| | openshift-monitoring | replicaset-controller | monitoring-plugin-75bcf9f5fd | SuccessfulCreate | Created pod: monitoring-plugin-75bcf9f5fd-5f2qh |
| | openshift-monitoring | deployment-controller | monitoring-plugin | ScalingReplicaSet | Scaled up replica set monitoring-plugin-75bcf9f5fd to 2 |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-network-node-identity | master-1_68b48a99-133c-43f7-b79f-0c8f60a4c5f2 | ovnkube-identity | LeaderElection | master-1_68b48a99-133c-43f7-b79f-0c8f60a4c5f2 became leader |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | StartingNewRevision | new revision 4 triggered by "optional configmap/oauth-metadata has been created" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: etcdserver: request timed out, possibly due to previous leader failure\nConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on kubeapiservers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: Missing PodIP in operand kube-apiserver-master-1 on node master-1\nRevisionControllerDegraded: etcdserver: request timed out, possibly due to previous leader failure\nConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on kubeapiservers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" | |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: etcdserver: request timed out, possibly due to previous leader failure" | |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/oauth-metadata -n openshift-kube-apiserver because it was missing |
| (x16) | openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObservedConfigChanged |
Writing updated observed config: map[string]any{ "admission": map[string]any{"pluginConfig": map[string]any{"PodSecurity": map[string]any{"configuration": map[string]any{"defaults": map[string]any{"audit": string("restricted"), "audit-version": string("latest"), "enforce": string("privileged"), "enforce-version": string("latest"), ...}}}, "network.openshift.io/ExternalIPRanger": map[string]any{"configuration": map[string]any{"allowIngressIP": bool(false), "apiVersion": string("network.openshift.io/v1"), "kind": string("ExternalIPRangerAdmissionConfig")}}, "network.openshift.io/RestrictedEndpointsAdmission": map[string]any{"configuration": map[string]any{"apiVersion": string("network.openshift.io/v1"), "kind": string("RestrictedEndpointsAdmissionConfig"), "restrictedCIDRs": []any{string("10.128.0.0/14"), string("172.30.0.0/16")}}}}}, "apiServerArguments": map[string]any{"api-audiences": []any{string("https://kubernetes.default.svc")}, "authentication-token-webhook-config-file": []any{string("/etc/kubernetes/static-pod-resources/secrets/webhook-authenticat"...)}, "authentication-token-webhook-version": []any{string("v1")}, "etcd-servers": []any{string("https://192.168.34.10:2379"), string("https://192.168.34.11:2379"), string("https://192.168.34.12:2379"), string("https://localhost:2379")}, ...}, + "authConfig": map[string]any{ + "oauthMetadataFile": string("/etc/kubernetes/static-pod-resources/configmaps/oauth-metadata/oauthMetadata"), + }, "corsAllowedOrigins": []any{string(`//127\.0\.0\.1(:|$)`), string("//localhost(:|$)")}, "servicesSubnet": string("172.30.0.0/16"), "servingInfo": map[string]any{"bindAddress": string("0.0.0.0:6443"), "bindNetwork": string("tcp4"), "cipherSuites": []any{string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), string("TLS_CHACHA20_POLY1305_SHA256"), string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), ...}, "minTLSVersion": string("VersionTLS12"), ...}, } |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: etcdserver: request timed out, possibly due to previous leader failure" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: etcdserver: request timed out, possibly due to previous leader failure\nConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on kubeapiservers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" | |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: Missing PodIP in operand kube-apiserver-master-1 on node master-1\nConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on kubeapiservers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: Missing PodIP in operand kube-apiserver-master-1 on node master-1" | |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: Missing PodIP in operand kube-apiserver-master-1 on node master-1\nRevisionControllerDegraded: etcdserver: request timed out, possibly due to previous leader failure\nConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on kubeapiservers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: Missing PodIP in operand kube-apiserver-master-1 on node master-1\nConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on kubeapiservers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" | |
openshift-config-operator |
config-operator-status-controller-statussyncer_config-operator |
openshift-config-operator |
OperatorStatusChanged |
Status for clusteroperator/config-operator changed: Degraded message changed from "KubeCloudConfigControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded" to "All is well" | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-installer-controller |
kube-controller-manager-operator |
NodeCurrentRevisionChanged |
Updated node "master-1" from revision 4 to 5 because static pod is ready | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Progressing message changed from "NodeInstallerProgressing: 2 nodes are at revision 4; 0 nodes have achieved new revision 5" to "NodeInstallerProgressing: 1 node is at revision 4; 1 node is at revision 5",Available message changed from "StaticPodsAvailable: 2 nodes are active; 2 nodes are at revision 4; 0 nodes have achieved new revision 5" to "StaticPodsAvailable: 2 nodes are active; 1 node is at revision 4; 1 node is at revision 5" | |
openshift-authentication-operator |
cluster-authentication-operator-config-observer-configobserver |
authentication-operator |
ObservedConfigChanged |
Writing updated section ("oauthAPIServer") of observed config: "\u00a0\u00a0map[string]any{\n\u00a0\u00a0\t\"apiServerArguments\": map[string]any{\n\u00a0\u00a0\t\t\"api-audiences\": []any{string(\"https://kubernetes.default.svc\")},\n\u00a0\u00a0\t\t\"cors-allowed-origins\": []any{string(`//127\\.0\\.0\\.1(:|$)`), string(\"//localhost(:|$)\")},\n\u00a0\u00a0\t\t\"etcd-servers\": []any{\n-\u00a0\t\t\tstring(\"https://192.168.34.10:2379\"),\n\u00a0\u00a0\t\t\tstring(\"https://192.168.34.11:2379\"),\n\u00a0\u00a0\t\t\tstring(\"https://192.168.34.12:2379\"),\n\u00a0\u00a0\t\t},\n\u00a0\u00a0\t\t\"tls-cipher-suites\": []any{string(\"TLS_AES_128_GCM_SHA256\"), string(\"TLS_AES_256_GCM_SHA384\"), string(\"TLS_CHACHA20_POLY1305_SHA256\"), string(\"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256\"), ...},\n\u00a0\u00a0\t\t\"tls-min-version\": string(\"VersionTLS12\"),\n\u00a0\u00a0\t},\n\u00a0\u00a0}\n" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-config-observer-configobserver |
kube-apiserver-operator |
ObservedConfigChanged |
Writing updated observed config: map[string]any{ "admission": map[string]any{"pluginConfig": map[string]any{"PodSecurity": map[string]any{"configuration": map[string]any{"defaults": map[string]any{"audit": string("restricted"), "audit-version": string("latest"), "enforce": string("privileged"), "enforce-version": string("latest"), ...}}}, "network.openshift.io/ExternalIPRanger": map[string]any{"configuration": map[string]any{"allowIngressIP": bool(false), "apiVersion": string("network.openshift.io/v1"), "kind": string("ExternalIPRangerAdmissionConfig")}}, "network.openshift.io/RestrictedEndpointsAdmission": map[string]any{"configuration": map[string]any{"apiVersion": string("network.openshift.io/v1"), "kind": string("RestrictedEndpointsAdmissionConfig"), "restrictedCIDRs": []any{string("10.128.0.0/14"), string("172.30.0.0/16")}}}}}, "apiServerArguments": map[string]any{ "api-audiences": []any{string("https://kubernetes.default.svc")}, "authentication-token-webhook-config-file": []any{string("/etc/kubernetes/static-pod-resources/secrets/webhook-authenticat"...)}, "authentication-token-webhook-version": []any{string("v1")}, "etcd-servers": []any{ - string("https://192.168.34.10:2379"), string("https://192.168.34.11:2379"), string("https://192.168.34.12:2379"), string("https://localhost:2379"), }, "feature-gates": []any{string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), ...}, "goaway-chance": []any{string("0.001")}, ... // 4 identical entries }, "authConfig": map[string]any{"oauthMetadataFile": string("/etc/kubernetes/static-pod-resources/configmaps/oauth-metadata/o"...)}, "corsAllowedOrigins": []any{string(`//127\.0\.0\.1(:|$)`), string("//localhost(:|$)")}, ... // 2 identical entries } | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller |
etcd-operator |
ConfigMapUpdated |
Updated ConfigMap/etcd-endpoints -n openshift-etcd: | |
openshift-config-operator |
config-operator-status-controller-statussyncer_config-operator |
openshift-config-operator |
OperatorStatusChanged |
Status for clusteroperator/config-operator changed: Degraded message changed from "All is well" to "KubeCloudConfigControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded" | |
openshift-authentication-operator |
cluster-authentication-operator-config-observer-configobserver |
authentication-operator |
ObserveStorageUpdated |
Updated storage urls to https://192.168.34.11:2379,https://192.168.34.12:2379 | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-config-observer-configobserver |
kube-apiserver-operator |
ObserveStorageUpdated |
Updated storage urls to https://192.168.34.11:2379,https://192.168.34.12:2379,https://localhost:2379 | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-pod-4 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/config-4 -n openshift-kube-apiserver because it was missing | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/2 pods have been updated to the latest generation and 0/2 pods are available\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.34.11:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 3, desired generation is 4.\nOAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/2 pods have been updated to the latest generation and 0/2 pods are available\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.34.11:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" | |
openshift-oauth-apiserver |
deployment-controller |
apiserver |
ScalingReplicaSet |
Scaled up replica set apiserver-84c8b8d745 to 1 from 0 | |
openshift-oauth-apiserver |
deployment-controller |
apiserver |
ScalingReplicaSet |
Scaled down replica set apiserver-7b6784d654 to 1 from 2 | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-4 -n openshift-kube-apiserver because it was missing | |
openshift-oauth-apiserver |
replicaset-controller |
apiserver-7b6784d654 |
SuccessfulDelete |
Deleted pod: apiserver-7b6784d654-l7lmp | |
openshift-oauth-apiserver |
kubelet |
apiserver-7b6784d654-l7lmp |
Killing |
Stopping container oauth-apiserver | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-installer-controller |
kube-controller-manager-operator |
NodeTargetRevisionChanged |
Updating node "master-2" from revision 4 to 5 because node master-2 with revision 4 is the oldest | |
openshift-oauth-apiserver |
replicaset-controller |
apiserver-84c8b8d745 |
SuccessfulCreate |
Created pod: apiserver-84c8b8d745-p4css | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/oauth-metadata-4 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/bound-sa-token-signing-certs-4 -n openshift-kube-apiserver because it was missing | |
(x2) | openshift-cluster-storage-operator |
kubelet |
csi-snapshot-controller-ddd7d64cd-hph6v |
Started |
Started container snapshot-controller |
openshift-route-controller-manager |
deployment-controller |
route-controller-manager |
ScalingReplicaSet |
Scaled up replica set route-controller-manager-b9c5786fd to 1 from 0 | |
openshift-controller-manager |
replicaset-controller |
controller-manager-779655cdb |
SuccessfulCreate |
Created pod: controller-manager-779655cdb-66gdt | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/etcd-serving-ca-4 -n openshift-kube-apiserver because it was missing | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 3, desired generation is 4.\nOAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/2 pods have been updated to the latest generation and 0/2 pods are available\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.34.11:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 0/2 pods have been updated to the latest generation and 2/2 pods are available\nOAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/2 pods have been updated to the latest generation and 0/2 pods are available\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.34.11:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-installer-controller |
kube-controller-manager-operator |
PodCreated |
Created Pod/installer-5-master-2 -n openshift-kube-controller-manager because it was missing | |
openshift-route-controller-manager |
kubelet |
route-controller-manager-77674cffc8-gf5tz |
Killing |
Stopping container route-controller-manager | |
openshift-controller-manager |
deployment-controller |
controller-manager |
ScalingReplicaSet |
Scaled down replica set controller-manager-56cfb99cfd to 1 from 2 | |
openshift-controller-manager |
replicaset-controller |
controller-manager-56cfb99cfd |
SuccessfulDelete |
Deleted pod: controller-manager-56cfb99cfd-rq5ck | |
openshift-controller-manager |
kubelet |
controller-manager-56cfb99cfd-rq5ck |
Killing |
Stopping container controller-manager | |
(x2) | openshift-machine-config-operator |
kubelet |
machine-config-operator-7b75469658-j2dbc |
Started |
Started container machine-config-operator |
openshift-cluster-storage-operator |
csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller |
csi-snapshot-controller-operator |
OperatorStatusChanged |
Status for clusteroperator/csi-snapshot-controller changed: Progressing changed from False to True ("CSISnapshotControllerProgressing: Waiting for Deployment to deploy pods") | |
openshift-machine-config-operator |
machine-config-operator |
master-1 |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", 
"TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} | |
openshift-route-controller-manager |
deployment-controller |
route-controller-manager |
ScalingReplicaSet |
Scaled down replica set route-controller-manager-77674cffc8 to 1 from 2 | |
openshift-route-controller-manager |
replicaset-controller |
route-controller-manager-b9c5786fd |
SuccessfulCreate |
Created pod: route-controller-manager-b9c5786fd-ttfxr | |
openshift-controller-manager |
deployment-controller |
controller-manager |
ScalingReplicaSet |
Scaled up replica set controller-manager-779655cdb to 1 from 0 | |
(x2) | openshift-machine-config-operator |
kubelet |
machine-config-operator-7b75469658-j2dbc |
Created |
Created container: machine-config-operator |
openshift-route-controller-manager |
replicaset-controller |
route-controller-manager-77674cffc8 |
SuccessfulDelete |
Deleted pod: route-controller-manager-77674cffc8-gf5tz | |
(x2) | openshift-cluster-storage-operator |
kubelet |
csi-snapshot-controller-ddd7d64cd-hph6v |
Created |
Created container: snapshot-controller |
openshift-controller-manager-operator |
openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager |
openshift-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 6, desired generation is 7.\nProgressing: deployment/route-controller-manager: observed generation is 4, desired generation is 5.\nProgressing: openshiftcontrollermanagers.operator.openshift.io/cluster: observed generation is 2, desired generation is 3." to "Progressing: deployment/controller-manager: updated replicas is 1, desired replicas is 2\nProgressing: deployment/route-controller-manager: updated replicas is 1, desired replicas is 2" | |
openshift-cluster-storage-operator |
kubelet |
csi-snapshot-controller-ddd7d64cd-hph6v |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eeb8312c455dd728870a6332c7e36e9068f6031127ce3e481a9a1131da527265" already present on machine | |
openshift-cluster-storage-operator |
snapshot-controller-leader/csi-snapshot-controller-ddd7d64cd-hph6v |
snapshot-controller-leader |
LeaderElection |
csi-snapshot-controller-ddd7d64cd-hph6v became leader | |
(x2) | openshift-machine-config-operator |
kubelet |
machine-config-operator-7b75469658-j2dbc |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d03c8198e20c39819634ba86ebc48d182a8b3f062cf7a3847175b91294512876" already present on machine |
openshift-controller-manager-operator |
openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager |
openshift-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-controller-manager changed: Progressing changed from False to True ("Progressing: deployment/controller-manager: observed generation is 6, desired generation is 7.\nProgressing: deployment/route-controller-manager: observed generation is 4, desired generation is 5.\nProgressing: openshiftcontrollermanagers.operator.openshift.io/cluster: observed generation is 2, desired generation is 3.") | |
(x2) | openshift-cluster-storage-operator |
csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller |
csi-snapshot-controller-operator |
OperatorStatusChanged |
Status for clusteroperator/csi-snapshot-controller changed: Progressing changed from True to False ("All is well") |
(x3) | openshift-etcd-operator |
openshift-cluster-etcd-operator-target-config-controller-targetconfigcontroller |
etcd-operator |
ConfigMapUpdated |
Updated ConfigMap/restore-etcd-pod -n openshift-etcd: cause by changes in data.pod.yaml,data.quorum-restore-pod.yaml |
openshift-kube-controller-manager |
multus |
installer-5-master-2 |
AddedInterface |
Add eth0 [10.129.0.56/23] from ovn-kubernetes | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ConfigMapUpdated |
Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml | |
openshift-kube-controller-manager |
kubelet |
installer-5-master-2 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:67a578604f1437ddb47d87e748b6772d86dd3856048cc355226789db22724b55" already present on machine | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 0/2 pods have been updated to the latest generation and 2/2 pods are available\nOAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/2 pods have been updated to the latest generation and 0/2 pods are available\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.34.11:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 1/2 pods have been updated to the latest generation and 1/2 pods are available\nOAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/2 pods have been updated to the latest generation and 0/2 pods are available\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.34.11:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-server-ca-4 -n openshift-kube-apiserver because it was missing | |
openshift-kube-controller-manager |
kubelet |
installer-5-master-2 |
Created |
Created container: installer | |
(x2) | openshift-cloud-controller-manager-operator |
kubelet |
cluster-cloud-controller-manager-operator-779749f859-bscv5 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:201e8fc1896dadc01ce68cec4c7437f12ddc3ac35792cc4d193242b5c41f48e1" already present on machine |
openshift-kube-controller-manager |
kubelet |
installer-5-master-2 |
Started |
Started container installer | |
openshift-network-operator |
kubelet |
network-operator-854f54f8c9-t6kgz |
BackOff |
Back-off restarting failed container network-operator in pod network-operator-854f54f8c9-t6kgz_openshift-network-operator(eae22243-e292-4623-90b4-dae431cf47dc) | |
(x2) | openshift-cloud-controller-manager-operator |
kubelet |
cluster-cloud-controller-manager-operator-779749f859-bscv5 |
Created |
Created container: config-sync-controllers |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kubelet-serving-ca-4 -n openshift-kube-apiserver because it was missing | |
openshift-kube-scheduler |
cert-recovery-controller |
cert-recovery-controller-lock |
LeaderElection |
master-1_1a0d5265-8ca6-4f1c-bbed-4ff74e88387b became leader | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/sa-token-signing-certs-4 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-audit-policies-4 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
SecretCreated |
Created Secret/etcd-client-4 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
SecretCreated |
Created Secret/localhost-recovery-client-token-4 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
SecretCreated |
Created Secret/localhost-recovery-serving-certkey-4 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
SecretCreated |
Created Secret/webhook-authenticator-4 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
RevisionTriggered |
new revision 4 triggered by "optional configmap/oauth-metadata has been created" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
StartingNewRevision |
new revision 5 triggered by "required configmap/config has changed" | |
(x2) | openshift-network-operator |
kubelet |
network-operator-854f54f8c9-t6kgz |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1656551c63dc1b09263ccc5fb52a13dff12d57e1c7510529789df1b41d253aa9" already present on machine |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 1; 1 node is at revision 2" to "NodeInstallerProgressing: 1 node is at revision 1; 1 node is at revision 2; 0 nodes have achieved new revision 3",Available message changed from "StaticPodsAvailable: 2 nodes are active; 1 node is at revision 1; 1 node is at revision 2" to "StaticPodsAvailable: 2 nodes are active; 1 node is at revision 1; 1 node is at revision 2; 0 nodes have achieved new revision 3" | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Progressing message changed from "NodeInstallerProgressing: 2 nodes are at revision 5; 0 nodes have achieved new revision 6" to "NodeInstallerProgressing: 1 node is at revision 5; 1 node is at revision 6",Available message changed from "StaticPodsAvailable: 2 nodes are active; 2 nodes are at revision 5; 0 nodes have achieved new revision 6" to "StaticPodsAvailable: 2 nodes are active; 1 node is at revision 5; 1 node is at revision 6" | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-installer-controller |
openshift-kube-scheduler-operator |
NodeCurrentRevisionChanged |
Updated node "master-1" from revision 5 to 6 because static pod is ready | |
(x3) | openshift-network-operator |
kubelet |
network-operator-854f54f8c9-t6kgz |
Created |
Created container: network-operator |
openshift-network-operator |
cluster-network-operator |
network-operator |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", 
"TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} | |
openshift-network-operator |
network-operator |
network-operator-lock |
LeaderElection |
master-1_e7a5902f-2196-45e9-884a-5627aaaefb3d became leader | |
(x3) | openshift-network-operator |
kubelet |
network-operator-854f54f8c9-t6kgz |
Started |
Started container network-operator |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-pod-5 -n openshift-kube-apiserver because it was missing | |
openshift-multus |
daemonset-controller |
cni-sysctl-allowlist-ds |
SuccessfulCreate |
Created pod: cni-sysctl-allowlist-ds-x4hbg | |
openshift-multus |
daemonset-controller |
cni-sysctl-allowlist-ds |
SuccessfulCreate |
Created pod: cni-sysctl-allowlist-ds-pqfgv | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/config-5 -n openshift-kube-apiserver because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-installer-controller |
openshift-kube-scheduler-operator |
NodeTargetRevisionChanged |
Updating node "master-2" from revision 5 to 6 because node master-2 with revision 5 is the oldest | |
openshift-route-controller-manager |
route-controller-manager |
openshift-route-controllers |
LeaderElection |
route-controller-manager-77674cffc8-k5fvv_5c26bf6d-fa48-4833-a607-19dabc70d691 became leader | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-installer-controller |
openshift-kube-scheduler-operator |
PodCreated |
Created Pod/installer-6-master-2 -n openshift-kube-scheduler because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-5 -n openshift-kube-apiserver because it was missing | |
openshift-kube-scheduler |
kubelet |
installer-6-master-2 |
Created |
Created container: installer | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ConfigMapUpdated |
Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/oauth-metadata-5 -n openshift-kube-apiserver because it was missing | |
openshift-kube-scheduler |
kubelet |
installer-6-master-2 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f81582ec6e6cc159d578a2d70ce7c8a4db8eb0172334226c9123770d7d2a1642" already present on machine | |
openshift-kube-scheduler |
multus |
installer-6-master-2 |
AddedInterface |
Add eth0 [10.129.0.57/23] from ovn-kubernetes | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/bound-sa-token-signing-certs-5 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 1; 1 node is at revision 2; 0 nodes have achieved new revision 3" to "NodeInstallerProgressing: 1 node is at revision 1; 1 node is at revision 2; 0 nodes have achieved new revision 4",Available message changed from "StaticPodsAvailable: 2 nodes are active; 1 node is at revision 1; 1 node is at revision 2; 0 nodes have achieved new revision 3" to "StaticPodsAvailable: 2 nodes are active; 1 node is at revision 1; 1 node is at revision 2; 0 nodes have achieved new revision 4" | |
openshift-kube-scheduler |
kubelet |
installer-6-master-2 |
Started |
Started container installer | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/etcd-serving-ca-5 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-server-ca-5 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kubelet-serving-ca-5 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-installer-controller |
kube-apiserver-operator |
PodCreated |
Created Pod/installer-4-master-1 -n openshift-kube-apiserver because it was missing | |
openshift-multus |
replicaset-controller |
multus-admission-controller-6bc7c56dc6 |
SuccessfulCreate |
Created pod: multus-admission-controller-6bc7c56dc6-n46rr | |
openshift-multus |
deployment-controller |
multus-admission-controller |
ScalingReplicaSet |
Scaled up replica set multus-admission-controller-6bc7c56dc6 to 1 | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/sa-token-signing-certs-5 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver |
multus |
installer-4-master-1 |
AddedInterface |
Add eth0 [10.128.0.79/23] from ovn-kubernetes | |
openshift-kube-apiserver |
kubelet |
installer-4-master-1 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine | |
openshift-kube-apiserver |
kubelet |
installer-4-master-1 |
Created |
Created container: installer | |
openshift-kube-apiserver |
kubelet |
installer-4-master-1 |
Started |
Started container installer | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-audit-policies-5 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
SecretCreated |
Created Secret/etcd-client-5 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: Missing PodIP in operand kube-apiserver-master-1 on node master-1" to "NodeControllerDegraded: All master nodes are ready" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
SecretCreated |
Created Secret/localhost-recovery-serving-certkey-5 -n openshift-kube-apiserver because it was missing | |
openshift-apiserver-operator |
openshift-apiserver-operator-config-observer-configobserver |
openshift-apiserver-operator |
ObservedConfigChanged |
Writing updated observed config:
  map[string]any{
  	... // 2 identical entries
  	"routingConfig": map[string]any{"subdomain": string("apps.ocp.openstack.lab")},
  	"servingInfo":   map[string]any{"cipherSuites": []any{string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), string("TLS_CHACHA20_POLY1305_SHA256"), string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), ...}, "minTLSVersion": string("VersionTLS12")},
  	"storageConfig": map[string]any{
  		"urls": []any{
- 			string("https://192.168.34.10:2379"),
  			string("https://192.168.34.11:2379"),
  			string("https://192.168.34.12:2379"),
  		},
  	},
  } | |
openshift-apiserver-operator |
openshift-apiserver-operator-config-observer-configobserver |
openshift-apiserver-operator |
ObserveStorageUpdated |
Updated storage urls to https://192.168.34.11:2379,https://192.168.34.12:2379 | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-status-controller-statussyncer_etcd |
etcd-operator |
OperatorStatusChanged |
Status for clusteroperator/etcd changed: Degraded message changed from "RevisionControllerDegraded: etcd cluster has quorum of 2 which is not fault tolerant: [{Member:ID:6676704299130470762 name:\"master-1\" peerURLs:\"https://192.168.34.11:2380\" clientURLs:\"https://192.168.34.11:2379\" Healthy:true Took:1.682357ms Error:<nil>} {Member:ID:7706623244043024291 name:\"master-2\" peerURLs:\"https://192.168.34.12:2380\" clientURLs:\"https://192.168.34.12:2379\" Healthy:true Took:2.360236ms Error:<nil>}]\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" to "RevisionControllerDegraded: etcd cluster has quorum of 2 which is not fault tolerant: [{Member:ID:6676704299130470762 name:\"master-1\" peerURLs:\"https://192.168.34.11:2380\" clientURLs:\"https://192.168.34.11:2379\" Healthy:true Took:2.907603ms Error:<nil>} {Member:ID:7706623244043024291 name:\"master-2\" peerURLs:\"https://192.168.34.12:2380\" clientURLs:\"https://192.168.34.12:2379\" Healthy:true Took:4.23311ms Error:<nil>}]\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master-2 |
Killing |
Stopping container cluster-policy-controller | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
SecretCreated |
Created Secret/localhost-recovery-client-token-5 -n openshift-kube-apiserver because it was missing | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master-2 |
Killing |
Stopping container kube-controller-manager | |
openshift-image-registry |
image-registry-operator |
cluster-image-registry-operator |
DaemonSetCreated |
Created DaemonSet.apps/node-ca -n openshift-image-registry because it was missing | |
openshift-kube-controller-manager |
static-pod-installer |
installer-5-master-2 |
StaticPodInstallerCompleted |
Successfully installed revision 5 | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master-2 |
Killing |
Stopping container kube-controller-manager-recovery-controller | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master-2 |
Killing |
Stopping container kube-controller-manager-cert-syncer | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
SecretCreated |
Created Secret/webhook-authenticator-5 -n openshift-kube-apiserver because it was missing | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "OperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 4, desired generation is 5." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 4, desired generation is 5.\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 4, desired generation is 5." | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Progressing changed from False to True ("OperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 4, desired generation is 5.") | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
RevisionTriggered |
new revision 5 triggered by "required configmap/config has changed" | |
(x9) | openshift-oauth-apiserver |
kubelet |
apiserver-7b6784d654-l7lmp |
Unhealthy |
Readiness probe failed: HTTP probe failed with statuscode: 500 |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ConfigMapUpdated |
Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: caused by changes in data.config-file.yaml | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 4, desired generation is 5.\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 4, desired generation is 5." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 4, desired generation is 5." | |
openshift-kube-controller-manager |
cluster-policy-controller-namespace-security-allocation-controller |
kube-controller-manager-master-1 |
CreatedSCCRanges |
created SCC ranges for openshift-network-console namespace | |
openshift-controller-manager |
openshift-controller-manager |
openshift-master-controllers |
LeaderElection |
controller-manager-56cfb99cfd-9798f became leader | |
openshift-kube-apiserver |
kubelet |
installer-4-master-1 |
Killing |
Stopping container installer | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 1; 1 node is at revision 2; 0 nodes have achieved new revision 4" to "NodeInstallerProgressing: 1 node is at revision 1; 1 node is at revision 2; 0 nodes have achieved new revision 5",Available message changed from "StaticPodsAvailable: 2 nodes are active; 1 node is at revision 1; 1 node is at revision 2; 0 nodes have achieved new revision 4" to "StaticPodsAvailable: 2 nodes are active; 1 node is at revision 1; 1 node is at revision 2; 0 nodes have achieved new revision 5" | |
(x10) | openshift-oauth-apiserver |
kubelet |
apiserver-7b6784d654-l7lmp |
ProbeError |
Readiness probe error: HTTP probe failed with statuscode: 500
body: [+]ping ok
[+]log ok
[+]etcd excluded: ok
[+]etcd-readiness excluded: ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]informer-sync ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/max-in-flight-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/openshift.io-StartUserInformer ok
[+]poststarthook/openshift.io-StartOAuthInformer ok
[+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok
[-]shutdown failed: reason withheld
readyz check failed |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master-2 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b283544da0bfbf6c8c5a11e0ca9fb4daaf4ac4ec910b30c07c7bef65a98f11d" already present on machine | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master-2 |
Created |
Created container: kube-controller-manager | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master-2 |
Started |
Started container kube-controller-manager | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master-2 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:19291a8938541dd95496e6f04aad7abf914ea2c8d076c1f149a12368682f85d4" already present on machine | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master-2 |
Created |
Created container: kube-controller-manager-cert-syncer | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master-2 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:67a578604f1437ddb47d87e748b6772d86dd3856048cc355226789db22724b55" already present on machine | |
openshift-kube-controller-manager |
cluster-policy-controller |
kube-controller-manager-master-2 |
ControlPlaneTopology |
unable to get control plane topology, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master-2 |
Started |
Started container kube-controller-manager-recovery-controller | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master-2 |
Created |
Created container: kube-controller-manager-recovery-controller | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master-2 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:67a578604f1437ddb47d87e748b6772d86dd3856048cc355226789db22724b55" already present on machine | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master-2 |
Started |
Started container kube-controller-manager-cert-syncer | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master-2 |
Started |
Started container cluster-policy-controller | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master-2 |
Created |
Created container: cluster-policy-controller | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-controller-manager-master-2 container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-2 container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-2 container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-2 container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: " to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: Missing PodIP in operand kube-controller-manager-master-2 on node master-2\nStaticPodsDegraded: pod/kube-controller-manager-master-2 container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-2 container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-2 container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-2 container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: " | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: Missing PodIP in operand kube-controller-manager-master-2 on node master-2\nStaticPodsDegraded: pod/kube-controller-manager-master-2 container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-2 container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-2 container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-2 container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: " to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: Missing PodIP in operand kube-controller-manager-master-2 on node master-2" | |
openshift-kube-apiserver |
multus |
installer-5-master-1 |
AddedInterface |
Add eth0 [10.128.0.80/23] from ovn-kubernetes | |
openshift-kube-apiserver |
kubelet |
installer-5-master-1 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-installer-controller |
kube-apiserver-operator |
PodCreated |
Created Pod/installer-5-master-1 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver |
kubelet |
installer-5-master-1 |
Created |
Created container: installer | |
openshift-kube-apiserver |
kubelet |
installer-5-master-1 |
Started |
Started container installer | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master-2 |
Killing |
Stopping container kube-scheduler | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master-2 |
Killing |
Stopping container kube-scheduler-cert-syncer | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master-2 |
Killing |
Stopping container kube-scheduler-recovery-controller | |
openshift-kube-scheduler |
static-pod-installer |
installer-6-master-2 |
StaticPodInstallerCompleted |
Successfully installed revision 6 | |
kube-system |
kube-controller-manager |
kube-controller-manager |
LeaderElection |
master-1_4678acc9-601d-4ef5-b842-e73b373a8003 became leader | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: Missing PodIP in operand kube-controller-manager-master-2 on node master-2" to "NodeControllerDegraded: All master nodes are ready" | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-config-observer-configobserver |
openshift-controller-manager-operator |
ObservedConfigChanged |
Writing updated observed config:
  map[string]any{
  	"build": map[string]any{"buildDefaults": map[string]any{"resources": map[string]any{}}, "imageTemplateFormat": map[string]any{"format": string("quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a435ee2ec"...)}},
  	"controllers": []any{
  		... // 8 identical elements
  		string("openshift.io/deploymentconfig"),
  		string("openshift.io/image-import"),
  		strings.Join({
- 			"-",
  			"openshift.io/image-puller-rolebindings",
  		}, ""),
  		string("openshift.io/image-signature-import"),
  		string("openshift.io/image-trigger"),
  		... // 2 identical elements
  		string("openshift.io/origin-namespace"),
  		string("openshift.io/serviceaccount"),
  		strings.Join({
- 			"-",
  			"openshift.io/serviceaccount-pull-secrets",
  		}, ""),
  		string("openshift.io/templateinstance"),
  		string("openshift.io/templateinstancefinalizer"),
  		string("openshift.io/unidling"),
  	},
  	"deployer": map[string]any{"imageTemplateFormat": map[string]any{"format": string("quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ac368a7ef"...)}},
  	"featureGates": []any{string("BuildCSIVolumes=true")},
  	"ingress": map[string]any{"ingressIPNetworkCIDR": string("")},
  } | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 4, desired generation is 5." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/2 pods have been updated to the latest generation and 1/2 pods are available" | |
openshift-image-registry |
daemonset-controller |
node-ca |
SuccessfulCreate |
Created pod: node-ca-xvwmq | |
openshift-apiserver |
replicaset-controller |
apiserver-595d5f74d8 |
SuccessfulDelete |
Deleted pod: apiserver-595d5f74d8-ttb94 | |
openshift-network-console |
deployment-controller |
networking-console-plugin |
ScalingReplicaSet |
Scaled up replica set networking-console-plugin-85df6bdd68 to 2 | |
openshift-network-console |
replicaset-controller |
networking-console-plugin-85df6bdd68 |
SuccessfulCreate |
Created pod: networking-console-plugin-85df6bdd68-f5bnc | |
openshift-network-console |
replicaset-controller |
networking-console-plugin-85df6bdd68 |
SuccessfulCreate |
Created pod: networking-console-plugin-85df6bdd68-2dd2d | |
openshift-apiserver |
deployment-controller |
apiserver |
ScalingReplicaSet |
Scaled down replica set apiserver-595d5f74d8 to 1 from 2 | |
openshift-apiserver |
kubelet |
apiserver-595d5f74d8-ttb94 |
Killing |
Stopping container openshift-apiserver | |
default |
node-controller |
master-1 |
RegisteredNode |
Node master-1 event: Registered Node master-1 in Controller | |
openshift-apiserver |
replicaset-controller |
apiserver-5f68d4c887 |
SuccessfulCreate |
Created pod: apiserver-5f68d4c887-s2fvb | |
default |
node-controller |
master-2 |
RegisteredNode |
Node master-2 event: Registered Node master-2 in Controller | |
openshift-image-registry |
daemonset-controller |
node-ca |
SuccessfulCreate |
Created pod: node-ca-8fg56 | |
openshift-apiserver |
kubelet |
apiserver-595d5f74d8-ttb94 |
Killing |
Stopping container openshift-apiserver-check-endpoints | |
openshift-apiserver |
deployment-controller |
apiserver |
ScalingReplicaSet |
Scaled up replica set apiserver-5f68d4c887 to 1 from 0 | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master-2 |
ProbeError |
Startup probe error: Get "https://192.168.34.12:10257/healthz": dial tcp 192.168.34.12:10257: connect: connection refused body: | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master-2 |
Unhealthy |
Startup probe failed: Get "https://192.168.34.12:10257/healthz": dial tcp 192.168.34.12:10257: connect: connection refused | |
openshift-controller-manager |
replicaset-controller |
controller-manager-779655cdb |
SuccessfulDelete |
Deleted pod: controller-manager-779655cdb-66gdt | |
openshift-route-controller-manager |
deployment-controller |
route-controller-manager |
ScalingReplicaSet |
Scaled up replica set route-controller-manager-76f4d8cd68 to 1 from 0 | |
openshift-route-controller-manager |
replicaset-controller |
route-controller-manager-76f4d8cd68 |
SuccessfulCreate |
Created pod: route-controller-manager-76f4d8cd68-t98ml | |
openshift-route-controller-manager |
replicaset-controller |
route-controller-manager-b9c5786fd |
SuccessfulDelete |
Deleted pod: route-controller-manager-b9c5786fd-ttfxr | |
openshift-controller-manager |
deployment-controller |
controller-manager |
ScalingReplicaSet |
Scaled down replica set controller-manager-779655cdb to 0 from 1 | |
openshift-controller-manager |
replicaset-controller |
controller-manager-66975b7c4d |
SuccessfulCreate |
Created pod: controller-manager-66975b7c4d-j962d | |
openshift-route-controller-manager |
deployment-controller |
route-controller-manager |
ScalingReplicaSet |
Scaled down replica set route-controller-manager-b9c5786fd to 0 from 1 | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager |
openshift-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: updated replicas is 1, desired replicas is 2\nProgressing: deployment/route-controller-manager: updated replicas is 1, desired replicas is 2" to "Progressing: deployment/controller-manager: observed generation is 7, desired generation is 8.\nProgressing: deployment/controller-manager: updated replicas is 1, desired replicas is 2\nProgressing: deployment/route-controller-manager: observed generation is 5, desired generation is 6.\nProgressing: deployment/route-controller-manager: updated replicas is 1, desired replicas is 2\nProgressing: openshiftcontrollermanagers.operator.openshift.io/cluster: observed generation is 3, desired generation is 4." | |
openshift-controller-manager |
deployment-controller |
controller-manager |
ScalingReplicaSet |
Scaled up replica set controller-manager-66975b7c4d to 1 from 0 | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ConfigMapUpdated |
Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: caused by changes in data.config-file.yaml | |
(x2) | openshift-kube-controller-manager-operator |
kube-controller-manager-operator-kube-controller-manager-node |
kube-controller-manager-operator |
MasterNodeObserved |
Observed new master node master-0 |
openshift-multus |
daemonset-controller |
multus-additional-cni-plugins |
SuccessfulCreate |
Created pod: multus-additional-cni-plugins-dzmh2 | |
(x2) | openshift-kube-apiserver-operator |
kube-apiserver-operator-kube-apiserver-node |
kube-apiserver-operator |
MasterNodeObserved |
Observed new master node master-0 |
openshift-network-node-identity |
daemonset-controller |
network-node-identity |
SuccessfulCreate |
Created pod: network-node-identity-lfw7t | |
openshift-authentication |
replicaset-controller |
oauth-openshift-6ddc4f49f9 |
SuccessfulCreate |
Created pod: oauth-openshift-6ddc4f49f9-qzlvm | |
(x2) | openshift-kube-controller-manager-operator |
kube-controller-manager-operator-kube-controller-manager-node |
kube-controller-manager-operator |
MasterNodesReadyChanged |
The master nodes not ready: node "master-0" not ready since 2025-10-14 13:20:12 +0000 UTC because KubeletNotReady ([container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found]) |
openshift-cluster-node-tuning-operator |
daemonset-controller |
tuned |
SuccessfulCreate |
Created pod: tuned-vvmcs | |
openshift-oauth-apiserver |
deployment-controller |
apiserver |
ScalingReplicaSet |
Scaled up replica set apiserver-84c8b8d745 to 2 from 1 | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-guardcontroller |
kube-apiserver-operator |
PodDisruptionBudgetUpdated |
Updated PodDisruptionBudget.policy/kube-apiserver-guard-pdb -n openshift-kube-apiserver because it changed | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-guardcontroller |
kube-controller-manager-operator |
PodDisruptionBudgetUpdated |
Updated PodDisruptionBudget.policy/kube-controller-manager-guard-pdb -n openshift-kube-controller-manager because it changed | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: The master nodes not ready: node \"master-0\" not ready since 2025-10-14 13:20:12 +0000 UTC because KubeletNotReady ([container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes \"master-0\" not found])" | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: The master nodes not ready: node \"master-0\" not ready since 2025-10-14 13:20:12 +0000 UTC because KubeletNotReady ([container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes \"master-0\" not found])" | |
(x2) | openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-kube-scheduler-node |
openshift-kube-scheduler-operator |
MasterNodesReadyChanged |
The master nodes not ready: node "master-0" not ready since 2025-10-14 13:20:12 +0000 UTC because KubeletNotReady ([container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found]) |
(x2) | openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-kube-scheduler-node |
openshift-kube-scheduler-operator |
MasterNodeObserved |
Observed new master node master-0 |
(x2) | openshift-kube-apiserver-operator |
kube-apiserver-operator-kube-apiserver-node |
kube-apiserver-operator |
MasterNodesReadyChanged |
The master nodes not ready: node "master-0" not ready since 2025-10-14 13:20:12 +0000 UTC because KubeletNotReady ([container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found]) |
openshift-oauth-apiserver |
replicaset-controller |
apiserver-84c8b8d745 |
SuccessfulCreate |
Created pod: apiserver-84c8b8d745-wnpsp | |
openshift-authentication |
deployment-controller |
oauth-openshift |
ScalingReplicaSet |
Scaled up replica set oauth-openshift-6ddc4f49f9 to 2 from 1 | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 1/2 pods have been updated to the latest generation and 1/2 pods are available\nOAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/2 pods have been updated to the latest generation and 0/2 pods are available\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.34.11:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 1/2 pods have been updated to the latest generation and 1/2 pods are available\nOAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 2, desired generation is 3.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.34.11:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 1/2 pods have been updated to the latest generation and 1/2 pods are available\nOAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 2, desired generation is 3.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.34.11:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 4, desired generation is 5.\nOAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 2, desired generation is 3.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.34.11:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: The master nodes not ready: node \"master-0\" not ready since 2025-10-14 13:20:12 +0000 UTC because KubeletNotReady ([container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes \"master-0\" not found])" | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-revisioncontroller |
etcd-operator |
StartingNewRevision |
new revision 6 triggered by "required configmap/etcd-pod has changed" | |
openshift-multus |
daemonset-controller |
multus |
SuccessfulCreate |
Created pod: multus-bvr92 | |
openshift-ovn-kubernetes |
daemonset-controller |
ovnkube-node |
SuccessfulCreate |
Created pod: ovnkube-node-xsrn9 | |
(x2) | openshift-etcd-operator |
openshift-cluster-etcd-operator-etcd-node |
etcd-operator |
MasterNodesReadyChanged |
The master nodes not ready: node "master-0" not ready since 2025-10-14 13:20:12 +0000 UTC because KubeletNotReady ([container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found]) |
openshift-monitoring |
daemonset-controller |
node-exporter |
SuccessfulCreate |
Created pod: node-exporter-gww4z | |
openshift-dns |
daemonset-controller |
node-resolver |
SuccessfulCreate |
Created pod: node-resolver-544bd | |
openshift-network-diagnostics |
daemonset-controller |
network-check-target |
SuccessfulCreate |
Created pod: network-check-target-vmk66 | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-guardcontroller |
etcd-operator |
PodDisruptionBudgetUpdated |
Updated PodDisruptionBudget.policy/etcd-guard-pdb -n openshift-etcd because it changed | |
openshift-multus |
daemonset-controller |
network-metrics-daemon |
SuccessfulCreate |
Created pod: network-metrics-daemon-p5vjv | |
(x2) | openshift-etcd-operator |
openshift-cluster-etcd-operator-etcd-node |
etcd-operator |
MasterNodeObserved |
Observed new master node master-0 |
openshift-image-registry |
daemonset-controller |
node-ca |
SuccessfulCreate |
Created pod: node-ca-4wx2z | |
openshift-machine-config-operator |
daemonset-controller |
machine-config-daemon |
SuccessfulCreate |
Created pod: machine-config-daemon-7q9jd | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 4, desired generation is 5.\nOAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 2, desired generation is 3.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.34.11:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 4, desired generation is 5.\nOAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/3 pods have been updated to the latest generation and 0/3 pods are available\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.34.11:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/2 pods have been updated to the latest generation and 1/2 pods are available" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 5, desired generation is 6." | |
openshift-apiserver |
deployment-controller |
apiserver |
ScalingReplicaSet |
Scaled up replica set apiserver-5f68d4c887 to 2 from 1 | |
openshift-apiserver |
replicaset-controller |
apiserver-5f68d4c887 |
SuccessfulCreate |
Created pod: apiserver-5f68d4c887-pqcgn | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-guardcontroller |
openshift-kube-scheduler-operator |
PodDisruptionBudgetUpdated |
Updated PodDisruptionBudget.policy/openshift-kube-scheduler-guard-pdb -n openshift-kube-scheduler because it changed | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 1; 1 node is at revision 2; 0 nodes have achieved new revision 5" to "NodeInstallerProgressing: 1 node is at revision 0; 1 node is at revision 1; 1 node is at revision 2; 0 nodes have achieved new revision 5",Available message changed from "StaticPodsAvailable: 2 nodes are active; 1 node is at revision 1; 1 node is at revision 2; 0 nodes have achieved new revision 5" to "StaticPodsAvailable: 2 nodes are active; 1 node is at revision 0; 1 node is at revision 1; 1 node is at revision 2; 0 nodes have achieved new revision 5" | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-installer-controller |
kube-controller-manager-operator |
NodeCurrentRevisionChanged |
Updated node "master-2" from revision 4 to 5 because static pod is ready | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 4; 1 node is at revision 5" to "NodeInstallerProgressing: 1 node is at revision 0; 2 nodes are at revision 5",Available message changed from "StaticPodsAvailable: 2 nodes are active; 1 node is at revision 4; 1 node is at revision 5" to "StaticPodsAvailable: 2 nodes are active; 1 node is at revision 0; 2 nodes are at revision 5" | |
default |
node-controller |
master-0 |
RegisteredNode |
Node master-0 event: Registered Node master-0 in Controller | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-revisioncontroller |
etcd-operator |
ConfigMapCreated |
Created ConfigMap/etcd-pod-6 -n openshift-etcd because it was missing | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 5, desired generation is 6." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 2/3 pods have been updated to the latest generation and 1/3 pods are available" | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-installer-controller |
kube-controller-manager-operator |
NodeTargetRevisionChanged |
Updating node "master-0" from revision 0 to 5 because node master-0 static pod not found | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 4, desired generation is 5.\nOAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/3 pods have been updated to the latest generation and 0/3 pods are available\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.34.11:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 2/3 pods have been updated to the latest generation and 1/3 pods are available\nOAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/3 pods have been updated to the latest generation and 0/3 pods are available\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.34.11:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 5; 1 node is at revision 6" to "NodeInstallerProgressing: 1 node is at revision 0; 1 node is at revision 5; 1 node is at revision 6",Available message changed from "StaticPodsAvailable: 2 nodes are active; 1 node is at revision 5; 1 node is at revision 6" to "StaticPodsAvailable: 2 nodes are active; 1 node is at revision 0; 1 node is at revision 5; 1 node is at revision 6" | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-installer-controller |
kube-controller-manager-operator |
PodCreated |
Created Pod/installer-5-master-0 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-kube-scheduler-node |
openshift-kube-scheduler-operator |
MasterNodesReadyChanged |
The master nodes not ready: node "master-0" not ready since 2025-10-14 13:20:12 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: The master nodes not ready: node \"master-0\" not ready since 2025-10-14 13:20:12 +0000 UTC because KubeletNotReady ([container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes \"master-0\" not found])" to "NodeControllerDegraded: The master nodes not ready: node \"master-0\" not ready since 2025-10-14 13:20:12 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kube-apiserver-node |
kube-apiserver-operator |
MasterNodesReadyChanged |
The master nodes not ready: node "master-0" not ready since 2025-10-14 13:20:12 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: The master nodes not ready: node \"master-0\" not ready since 2025-10-14 13:20:12 +0000 UTC because KubeletNotReady ([container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes \"master-0\" not found])" to "NodeControllerDegraded: The master nodes not ready: node \"master-0\" not ready since 2025-10-14 13:20:12 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)" | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: The master nodes not ready: node \"master-0\" not ready since 2025-10-14 13:20:12 +0000 UTC because KubeletNotReady ([container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes \"master-0\" not found])" to "NodeControllerDegraded: The master nodes not ready: node \"master-0\" not ready since 2025-10-14 13:20:12 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)" | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-etcd-node |
etcd-operator |
MasterNodesReadyChanged |
The master nodes not ready: node "master-0" not ready since 2025-10-14 13:20:12 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-kube-controller-manager-node |
kube-controller-manager-operator |
MasterNodesReadyChanged |
The master nodes not ready: node "master-0" not ready since 2025-10-14 13:20:12 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) | |
(x5) | openshift-etcd |
kubelet |
installer-5-master-0 |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access" : object "openshift-etcd"/"kube-root-ca.crt" not registered |
openshift-controller-manager |
deployment-controller |
controller-manager |
ScalingReplicaSet |
Scaled up replica set controller-manager-66975b7c4d to 2 from 1 | |
(x5) | openshift-etcd |
kubelet |
installer-5-master-0 |
NetworkNotReady |
network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
openshift-controller-manager-operator |
openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager |
openshift-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 7, desired generation is 8.\nProgressing: deployment/controller-manager: updated replicas is 1, desired replicas is 2\nProgressing: deployment/route-controller-manager: observed generation is 5, desired generation is 6.\nProgressing: deployment/route-controller-manager: updated replicas is 1, desired replicas is 2\nProgressing: openshiftcontrollermanagers.operator.openshift.io/cluster: observed generation is 3, desired generation is 4." to "Progressing: deployment/controller-manager: observed generation is 8, desired generation is 9.\nProgressing: deployment/controller-manager: updated replicas is 1, desired replicas is 3\nProgressing: deployment/route-controller-manager: observed generation is 6, desired generation is 7.\nProgressing: deployment/route-controller-manager: updated replicas is 1, desired replicas is 3" | |
openshift-route-controller-manager |
deployment-controller |
route-controller-manager |
ScalingReplicaSet |
Scaled up replica set route-controller-manager-76f4d8cd68 to 2 from 1 | |
openshift-controller-manager |
replicaset-controller |
controller-manager-66975b7c4d |
SuccessfulCreate |
Created pod: controller-manager-66975b7c4d-kl7k6 | |
openshift-route-controller-manager |
replicaset-controller |
route-controller-manager-76f4d8cd68 |
SuccessfulCreate |
Created pod: route-controller-manager-76f4d8cd68-bzmnd | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-installer-controller |
etcd-operator |
PodCreated |
Created Pod/installer-6-master-0 -n openshift-etcd because it was missing | |
(x7) | openshift-apiserver |
kubelet |
apiserver-595d5f74d8-ttb94 |
Unhealthy |
Readiness probe failed: HTTP probe failed with statuscode: 500 |
openshift-etcd-operator |
openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller |
etcd-operator |
SecretUpdated |
Updated Secret/etcd-all-certs -n openshift-etcd because it changed | |
(x8) | openshift-apiserver |
kubelet |
apiserver-595d5f74d8-ttb94 |
ProbeError |
Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd excluded: ok [+]etcd-readiness excluded: ok [+]poststarthook/start-apiserver-admission-initializer ok [+]informer-sync ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/max-in-flight-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/image.openshift.io-apiserver-caches ok [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok [+]poststarthook/project.openshift.io-projectcache ok [+]poststarthook/project.openshift.io-projectauthorizationcache ok [+]poststarthook/openshift.io-startinformers ok [+]poststarthook/openshift.io-restmapperupdater ok [+]poststarthook/quota.openshift.io-clusterquotamapping ok [-]shutdown failed: reason withheld readyz check failed |
(x4) | openshift-etcd-operator |
openshift-cluster-etcd-operator-target-config-controller-targetconfigcontroller |
etcd-operator |
ConfigMapUpdated |
Updated ConfigMap/etcd-pod -n openshift-etcd: caused by changes in data.pod.yaml |
(x2) | openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master-2 |
Started |
Started container wait-for-host-port |
(x2) | openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master-2 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b283544da0bfbf6c8c5a11e0ca9fb4daaf4ac4ec910b30c07c7bef65a98f11d" already present on machine |
(x2) | openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master-2 |
Created |
Created container: wait-for-host-port |
openshift-kube-apiserver |
apiserver |
kube-apiserver-master-1 |
ShutdownInitiated |
Received signal to terminate, becoming unready, but keeping serving | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 2/3 pods have been updated to the latest generation and 1/3 pods are available\nOAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/3 pods have been updated to the latest generation and 0/3 pods are available\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.34.11:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 2/3 pods have been updated to the latest generation and 1/3 pods are available\nOAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/3 pods have been updated to the latest generation and 0/3 pods are available\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.34.12:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)",Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.ocp.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.182.42:443/healthz\": dial tcp 172.30.182.42:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.34.11:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.ocp.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.182.42:443/healthz\": dial tcp 172.30.182.42:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.34.12:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master-1 |
Killing |
Stopping container kube-apiserver-cert-syncer | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master-1 |
Killing |
Stopping container kube-apiserver | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master-1 |
Killing |
Stopping container kube-apiserver-cert-regeneration-controller | |
(x5) | openshift-machine-config-operator |
kubelet |
kube-rbac-proxy-crio-master-0 |
BackOff |
Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(117b8efe269c98124cf5022ab3c340a5) |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master-1 |
Killing |
Stopping container kube-apiserver-insecure-readyz | |
openshift-kube-apiserver |
apiserver |
kube-apiserver-master-1 |
TerminationPreShutdownHooksFinished |
All pre-shutdown hooks have been finished | |
openshift-kube-apiserver |
static-pod-installer |
installer-5-master-1 |
StaticPodInstallerCompleted |
Successfully installed revision 5 | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master-1 |
Killing |
Stopping container kube-apiserver-check-endpoints | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager |
openshift-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 8, desired generation is 9.\nProgressing: deployment/controller-manager: updated replicas is 1, desired replicas is 3\nProgressing: deployment/route-controller-manager: observed generation is 6, desired generation is 7.\nProgressing: deployment/route-controller-manager: updated replicas is 1, desired replicas is 3" to "Progressing: deployment/controller-manager: updated replicas is 2, desired replicas is 3\nProgressing: deployment/route-controller-manager: updated replicas is 2, desired replicas is 3" | |
(x6) | openshift-etcd |
kubelet |
installer-6-master-0 |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access" : object "openshift-etcd"/"kube-root-ca.crt" not registered |
(x11) | openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-guard-master-2 |
Unhealthy |
Readiness probe failed: Get "https://192.168.34.12:10259/healthz": dial tcp 192.168.34.12:10259: connect: connection refused |
(x10) | openshift-etcd |
kubelet |
installer-6-master-0 |
NetworkNotReady |
network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
(x12) | openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-guard-master-2 |
ProbeError |
Readiness probe error: Get "https://192.168.34.12:10259/healthz": dial tcp 192.168.34.12:10259: connect: connection refused body: |
(x7) | openshift-kube-controller-manager |
kubelet |
installer-5-master-0 |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access" : object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered |
(x14) | openshift-catalogd |
kubelet |
catalogd-controller-manager-596f9d8bbf-wn7c6 |
FailedMount |
MountVolume.SetUp failed for volume "etc-docker" : hostPath type check failed: /etc/docker is not a directory |
(x14) | openshift-operator-controller |
kubelet |
operator-controller-controller-manager-668cb7cdc8-lwlfz |
FailedMount |
MountVolume.SetUp failed for volume "etc-docker" : hostPath type check failed: /etc/docker is not a directory |
(x18) | openshift-kube-controller-manager |
kubelet |
installer-5-master-0 |
NetworkNotReady |
network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
openshift-route-controller-manager |
kubelet |
route-controller-manager-77674cffc8-k5fvv |
ProbeError |
Liveness probe error: Get "https://10.128.0.68:8443/healthz": dial tcp 10.128.0.68:8443: connect: connection refused body: | |
openshift-route-controller-manager |
kubelet |
route-controller-manager-77674cffc8-k5fvv |
Unhealthy |
Liveness probe failed: Get "https://10.128.0.68:8443/healthz": dial tcp 10.128.0.68:8443: connect: connection refused | |
(x4) | openshift-machine-config-operator |
kubelet |
kube-rbac-proxy-crio-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine |
(x4) | openshift-machine-config-operator |
kubelet |
kube-rbac-proxy-crio-master-0 |
Started |
Started container kube-rbac-proxy-crio |
(x4) | openshift-machine-config-operator |
kubelet |
kube-rbac-proxy-crio-master-0 |
Created |
Created container: kube-rbac-proxy-crio |
| openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f81582ec6e6cc159d578a2d70ce7c8a4db8eb0172334226c9123770d7d2a1642" already present on machine |
| openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-2 | Started | Started container kube-scheduler |
| openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-2 | Created | Created container: kube-scheduler |
| openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b283544da0bfbf6c8c5a11e0ca9fb4daaf4ac4ec910b30c07c7bef65a98f11d" already present on machine |
| openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-2 | Created | Created container: kube-scheduler-recovery-controller |
| openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-2 | Started | Started container kube-scheduler-recovery-controller |
| openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f81582ec6e6cc159d578a2d70ce7c8a4db8eb0172334226c9123770d7d2a1642" already present on machine |
| openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-2 | Started | Started container kube-scheduler-cert-syncer |
| openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-2 | Created | Created container: kube-scheduler-cert-syncer |
(x5) | openshift-etcd | kubelet | installer-7-master-0 | FailedMount | MountVolume.SetUp failed for volume "kube-api-access" : object "openshift-etcd"/"kube-root-ca.crt" not registered |
(x5) | openshift-etcd | kubelet | installer-7-master-0 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
(x2) | openshift-route-controller-manager | kubelet | route-controller-manager-77674cffc8-k5fvv | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:da8d1dd8c084774a49a88aef98ef62c56592a46d75830ed0d3e5e363859e3b08" already present on machine |
| openshift-route-controller-manager | route-controller-manager | openshift-route-controllers | LeaderElection | route-controller-manager-77674cffc8-k5fvv_6002cf53-29ca-4cdd-b3c8-0c0c7d351c3d became leader |
(x3) | openshift-route-controller-manager | kubelet | route-controller-manager-77674cffc8-k5fvv | Created | Created container: route-controller-manager |
(x2) | openshift-route-controller-manager | kubelet | route-controller-manager-77674cffc8-k5fvv | Unhealthy | Readiness probe failed: Get "https://10.128.0.68:8443/healthz": dial tcp 10.128.0.68:8443: connect: connection refused |
(x2) | openshift-route-controller-manager | kubelet | route-controller-manager-77674cffc8-k5fvv | ProbeError | Readiness probe error: Get "https://10.128.0.68:8443/healthz": dial tcp 10.128.0.68:8443: connect: connection refused body: |
(x3) | openshift-route-controller-manager | kubelet | route-controller-manager-77674cffc8-k5fvv | Started | Started container route-controller-manager |
(x9) | openshift-kube-apiserver | kubelet | kube-apiserver-guard-master-1 | ProbeError | Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]api-openshift-apiserver-available ok [+]api-openshift-oauth-apiserver-available ok [+]informer-sync ok [+]poststarthook/quota.openshift.io-clusterquotamapping ok [+]poststarthook/openshift.io-api-request-count-filter ok [+]poststarthook/openshift.io-startkubeinformers ok [+]poststarthook/openshift.io-openshift-apiserver-reachable ok [+]poststarthook/openshift.io-oauth-apiserver-reachable ok [+]poststarthook/start-apiserver-admission-initializer ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/priority-and-fairness-config-consumer ok [+]poststarthook/priority-and-fairness-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/start-apiextensions-informers ok [+]poststarthook/start-apiextensions-controllers ok [+]poststarthook/crd-informer-synced ok [+]poststarthook/start-system-namespaces-controller ok [+]poststarthook/start-cluster-authentication-info-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok [+]poststarthook/start-legacy-token-tracking-controller ok [+]poststarthook/start-service-ip-repair-controllers ok [+]poststarthook/rbac/bootstrap-roles ok [+]poststarthook/scheduling/bootstrap-system-priority-classes ok [+]poststarthook/priority-and-fairness-config-producer ok [+]poststarthook/bootstrap-controller ok [+]poststarthook/aggregator-reload-proxy-client-cert ok [+]poststarthook/start-kube-aggregator-informers ok [+]poststarthook/apiservice-status-local-available-controller ok [+]poststarthook/apiservice-status-remote-available-controller ok [+]poststarthook/apiservice-registration-controller ok [+]poststarthook/apiservice-wait-for-first-sync ok [+]poststarthook/apiservice-discovery-controller ok [+]poststarthook/kube-apiserver-autoregistration ok [+]autoregister-completion ok [+]poststarthook/apiservice-openapi-controller ok [+]poststarthook/apiservice-openapiv3-controller ok [-]shutdown failed: reason withheld readyz check failed |
| openshift-cloud-controller-manager-operator | master-2_c7377241-0954-47f5-8946-cfe1588cea19 | cluster-cloud-config-sync-leader | LeaderElection | master-2_c7377241-0954-47f5-8946-cfe1588cea19 became leader |
(x7) | openshift-etcd | kubelet | installer-8-master-0 | FailedMount | MountVolume.SetUp failed for volume "kube-api-access" : object "openshift-etcd"/"kube-root-ca.crt" not registered |
(x18) | openshift-etcd | kubelet | installer-8-master-0 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| openshift-image-registry | kubelet | node-ca-xvwmq | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:033c253ddc49271d2affc9841208ba0a36a902d5cf00eae4873bae24715622d2" |
| openshift-dns | kubelet | node-resolver-544bd | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c1bf279b80440264700aa5e7b186b74a9ca45bd6a14638beb3ee5df0e610086a" |
| openshift-multus | kubelet | multus-bvr92 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd0854905c4929cfbb163b57dd290d4a74e65d11c01d86b5e1e177a0c246106e" |
| openshift-machine-config-operator | kubelet | machine-config-daemon-7q9jd | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine |
| openshift-image-registry | kubelet | node-ca-4wx2z | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:033c253ddc49271d2affc9841208ba0a36a902d5cf00eae4873bae24715622d2" |
| openshift-ovn-kubernetes | kubelet | ovnkube-node-xsrn9 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b05c14f2032f7ba3017e9bcb6b3be4e7eaed8223e30a721b46b24f9cdcbd6a95" |
| openshift-cluster-node-tuning-operator | kubelet | tuned-vvmcs | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ca9272c8bbbde3ffdea2887c91dfb5ec4b09de7a8e2ae03aa5a47f56ff41e326" |
| openshift-multus | kubelet | multus-additional-cni-plugins-dzmh2 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbde693d384ae08cdaf9126a9a6359bb5515793f63108ef216cbddf1c995af3e" |
| openshift-kube-scheduler | default-scheduler | kube-scheduler | LeaderElection | master-1_3ea8428e-8af5-4bb3-99db-7552001af70b became leader |
| openshift-network-node-identity | kubelet | network-node-identity-lfw7t | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b05c14f2032f7ba3017e9bcb6b3be4e7eaed8223e30a721b46b24f9cdcbd6a95" |
| openshift-machine-config-operator | kubelet | machine-config-daemon-7q9jd | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d03c8198e20c39819634ba86ebc48d182a8b3f062cf7a3847175b91294512876" already present on machine |
| openshift-machine-config-operator | kubelet | machine-config-daemon-7q9jd | Created | Created container: machine-config-daemon |
| openshift-oauth-apiserver | multus | apiserver-84c8b8d745-p4css | AddedInterface | Add eth0 [10.129.0.58/23] from ovn-kubernetes |
| openshift-machine-config-operator | kubelet | machine-config-daemon-7q9jd | Started | Started container machine-config-daemon |
| openshift-machine-config-operator | kubelet | machine-config-daemon-7q9jd | Started | Started container kube-rbac-proxy |
| openshift-machine-config-operator | kubelet | machine-config-daemon-7q9jd | Created | Created container: kube-rbac-proxy |
| openshift-image-registry | kubelet | node-ca-8fg56 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:033c253ddc49271d2affc9841208ba0a36a902d5cf00eae4873bae24715622d2" |
| openshift-console-operator | multus | console-operator-6768b5f5f9-6l8p6 | AddedInterface | Add eth0 [10.129.0.63/23] from ovn-kubernetes |
| openshift-network-console | kubelet | networking-console-plugin-85df6bdd68-2dd2d | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8afd9235675d5dc97f1aa8680f0d4b4801d7a8aa7e503cb938d588d522933c79" |
| openshift-multus | kubelet | multus-admission-controller-6bc7c56dc6-n46rr | Started | Started container multus-admission-controller |
| openshift-multus | kubelet | multus-admission-controller-6bc7c56dc6-n46rr | Created | Created container: multus-admission-controller |
| openshift-multus | kubelet | multus-admission-controller-6bc7c56dc6-n46rr | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:20340db1108fda428a7abee6193330945c70ad69148f122a7f32a889047c8003" already present on machine |
| openshift-multus | multus | multus-admission-controller-6bc7c56dc6-n46rr | AddedInterface | Add eth0 [10.129.0.65/23] from ovn-kubernetes |
| openshift-oauth-apiserver | kubelet | apiserver-84c8b8d745-p4css | Started | Started container fix-audit-permissions |
| openshift-multus | kubelet | multus-admission-controller-6bc7c56dc6-n46rr | Created | Created container: kube-rbac-proxy |
| openshift-authentication | multus | oauth-openshift-6ddc4f49f9-thnnf | AddedInterface | Add eth0 [10.128.0.81/23] from ovn-kubernetes |
| openshift-multus | kubelet | multus-admission-controller-6bc7c56dc6-n46rr | Started | Started container kube-rbac-proxy |
| openshift-oauth-apiserver | kubelet | apiserver-84c8b8d745-p4css | Created | Created container: fix-audit-permissions |
| openshift-oauth-apiserver | kubelet | apiserver-84c8b8d745-p4css | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a10f1f5c782b4f4fb9c364625daf34791903749d4149eb87291c70598b16b404" already present on machine |
| openshift-authentication | kubelet | oauth-openshift-6ddc4f49f9-thnnf | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ed5799bca3f9d73de168cbc978de18a72d2cb6bd22149c4fc813fb7b9a971f5a" |
| openshift-network-console | kubelet | networking-console-plugin-85df6bdd68-f5bnc | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8afd9235675d5dc97f1aa8680f0d4b4801d7a8aa7e503cb938d588d522933c79" |
| openshift-oauth-apiserver | kubelet | apiserver-84c8b8d745-p4css | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a10f1f5c782b4f4fb9c364625daf34791903749d4149eb87291c70598b16b404" already present on machine |
| openshift-network-console | multus | networking-console-plugin-85df6bdd68-f5bnc | AddedInterface | Add eth0 [10.128.0.83/23] from ovn-kubernetes |
| openshift-route-controller-manager | multus | route-controller-manager-76f4d8cd68-t98ml | AddedInterface | Add eth0 [10.129.0.60/23] from ovn-kubernetes |
| openshift-route-controller-manager | kubelet | route-controller-manager-76f4d8cd68-t98ml | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:da8d1dd8c084774a49a88aef98ef62c56592a46d75830ed0d3e5e363859e3b08" already present on machine |
| openshift-route-controller-manager | kubelet | route-controller-manager-76f4d8cd68-t98ml | Created | Created container: route-controller-manager |
| openshift-monitoring | kubelet | metrics-server-76c4979bdc-gds6w | Started | Started container metrics-server |
| openshift-network-console | multus | networking-console-plugin-85df6bdd68-2dd2d | AddedInterface | Add eth0 [10.129.0.66/23] from ovn-kubernetes |
| openshift-monitoring | kubelet | node-exporter-gww4z | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66366501aac86a6d898d235d0b96dbe7679a2e142e8c615524f0bdc3ddd68b21" |
| openshift-monitoring | kubelet | node-exporter-gww4z | FailedToRetrieveImagePullSecret | Unable to retrieve some image pull secrets (node-exporter-dockercfg-fk8vl); attempting to pull the image may not succeed. |
| openshift-route-controller-manager | kubelet | route-controller-manager-76f4d8cd68-t98ml | Started | Started container route-controller-manager |
| openshift-multus | kubelet | multus-admission-controller-6bc7c56dc6-n46rr | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine |
| openshift-authentication | kubelet | oauth-openshift-55df5b4c9d-k6sz4 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ed5799bca3f9d73de168cbc978de18a72d2cb6bd22149c4fc813fb7b9a971f5a" |
| openshift-controller-manager | kubelet | controller-manager-66975b7c4d-j962d | Started | Started container controller-manager |
| openshift-monitoring | kubelet | monitoring-plugin-75bcf9f5fd-xkw2l | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:84adcf9faa58ecd3baf5d7406e6ccc4f83a83c1b6d67dc4e188311d780221650" |
| openshift-monitoring | multus | monitoring-plugin-75bcf9f5fd-xkw2l | AddedInterface | Add eth0 [10.128.0.82/23] from ovn-kubernetes |
| openshift-console-operator | kubelet | console-operator-6768b5f5f9-6l8p6 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74b33e795f6f701c4d5fa1ff8b9cb18dd9b0c239f3d0c7c68565f6ba9c846bd" |
| openshift-controller-manager | kubelet | controller-manager-66975b7c4d-j962d | Created | Created container: controller-manager |
| openshift-controller-manager | kubelet | controller-manager-66975b7c4d-j962d | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5950bf8a793f25392f3fdfa898a2bfe0998be83e86a5f93c07a9d22a0816b9c6" already present on machine |
| openshift-machine-config-operator | machine-config-operator | master-0 | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| openshift-controller-manager | multus | controller-manager-66975b7c4d-j962d | AddedInterface | Add eth0 [10.129.0.59/23] from ovn-kubernetes |
| openshift-monitoring | kubelet | monitoring-plugin-75bcf9f5fd-5f2qh | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:84adcf9faa58ecd3baf5d7406e6ccc4f83a83c1b6d67dc4e188311d780221650" |
| openshift-monitoring | multus | monitoring-plugin-75bcf9f5fd-5f2qh | AddedInterface | Add eth0 [10.129.0.64/23] from ovn-kubernetes |
| openshift-authentication | multus | oauth-openshift-55df5b4c9d-k6sz4 | AddedInterface | Add eth0 [10.129.0.62/23] from ovn-kubernetes |
| openshift-monitoring | multus | metrics-server-76c4979bdc-gds6w | AddedInterface | Add eth0 [10.129.0.61/23] from ovn-kubernetes |
| openshift-monitoring | kubelet | metrics-server-76c4979bdc-gds6w | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aa8586795f9801090b8f01a74743474c41b5987eefc3a9b2c58f937098a1704f" already present on machine |
| openshift-monitoring | kubelet | metrics-server-76c4979bdc-gds6w | Created | Created container: metrics-server |
| openshift-machine-config-operator | machineconfigcontroller-nodecontroller | master | AnnotationChange | Node master-0 now has machineconfiguration.openshift.io/desiredConfig=rendered-master-1c6c5fbd3d52010f02d09a7fd9bcec0a |
| openshift-machine-config-operator | machineconfigcontroller-nodecontroller | master | AnnotationChange | Node master-0 now has machineconfiguration.openshift.io/currentConfig=rendered-master-1c6c5fbd3d52010f02d09a7fd9bcec0a |
| openshift-multus | replicaset-controller | multus-admission-controller-6bc7c56dc6 | SuccessfulCreate | Created pod: multus-admission-controller-6bc7c56dc6-4dpkm |
| openshift-oauth-apiserver | kubelet | apiserver-84c8b8d745-p4css | Created | Created container: oauth-apiserver |
| openshift-oauth-apiserver | kubelet | apiserver-84c8b8d745-p4css | Started | Started container oauth-apiserver |
| openshift-multus | kubelet | multus-admission-controller-7b6b7bb859-vrzvk | Killing | Stopping container multus-admission-controller |
| openshift-multus | kubelet | multus-admission-controller-7b6b7bb859-vrzvk | Killing | Stopping container kube-rbac-proxy |
| openshift-multus | replicaset-controller | multus-admission-controller-7b6b7bb859 | SuccessfulDelete | Deleted pod: multus-admission-controller-7b6b7bb859-vrzvk |
| openshift-multus | deployment-controller | multus-admission-controller | ScalingReplicaSet | Scaled down replica set multus-admission-controller-7b6b7bb859 to 1 from 2 |
| openshift-multus | deployment-controller | multus-admission-controller | ScalingReplicaSet | Scaled up replica set multus-admission-controller-6bc7c56dc6 to 2 from 1 |
| openshift-machine-config-operator | machineconfigcontroller-nodecontroller | master | AnnotationChange | Node master-0 now has machineconfiguration.openshift.io/state=Done |
| openshift-image-registry | kubelet | node-ca-8fg56 | Created | Created container: node-ca |
| openshift-network-console | kubelet | networking-console-plugin-85df6bdd68-2dd2d | Started | Started container networking-console-plugin |
| openshift-console-operator | kubelet | console-operator-6768b5f5f9-6l8p6 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74b33e795f6f701c4d5fa1ff8b9cb18dd9b0c239f3d0c7c68565f6ba9c846bd" in 3.166s (3.166s including waiting). Image size: 505275807 bytes. |
| openshift-monitoring | kubelet | monitoring-plugin-75bcf9f5fd-xkw2l | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:84adcf9faa58ecd3baf5d7406e6ccc4f83a83c1b6d67dc4e188311d780221650" in 2.999s (2.999s including waiting). Image size: 440842752 bytes. |
| openshift-monitoring | kubelet | monitoring-plugin-75bcf9f5fd-xkw2l | Created | Created container: monitoring-plugin |
| openshift-console-operator | kubelet | console-operator-6768b5f5f9-6l8p6 | Started | Started container console-operator |
| openshift-monitoring | kubelet | monitoring-plugin-75bcf9f5fd-xkw2l | Started | Started container monitoring-plugin |
| openshift-image-registry | kubelet | node-ca-xvwmq | Created | Created container: node-ca |
| openshift-image-registry | kubelet | node-ca-xvwmq | Started | Started container node-ca |
| openshift-image-registry | kubelet | node-ca-8fg56 | Started | Started container node-ca |
| openshift-network-console | kubelet | networking-console-plugin-85df6bdd68-2dd2d | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8afd9235675d5dc97f1aa8680f0d4b4801d7a8aa7e503cb938d588d522933c79" in 3.001s (3.001s including waiting). Image size: 439442953 bytes. |
| openshift-authentication | kubelet | oauth-openshift-55df5b4c9d-k6sz4 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ed5799bca3f9d73de168cbc978de18a72d2cb6bd22149c4fc813fb7b9a971f5a" in 3.125s (3.125s including waiting). Image size: 474495494 bytes. |
| openshift-network-console | kubelet | networking-console-plugin-85df6bdd68-2dd2d | Created | Created container: networking-console-plugin |
| openshift-console-operator | kubelet | console-operator-6768b5f5f9-6l8p6 | Created | Created container: console-operator |
| openshift-authentication | kubelet | oauth-openshift-55df5b4c9d-k6sz4 | Created | Created container: oauth-openshift |
| openshift-image-registry | kubelet | node-ca-xvwmq | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:033c253ddc49271d2affc9841208ba0a36a902d5cf00eae4873bae24715622d2" in 3.437s (3.437s including waiting). Image size: 483543768 bytes. |
| openshift-multus | multus | multus-admission-controller-6bc7c56dc6-4dpkm | AddedInterface | Add eth0 [10.128.0.84/23] from ovn-kubernetes |
| openshift-network-console | kubelet | networking-console-plugin-85df6bdd68-f5bnc | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8afd9235675d5dc97f1aa8680f0d4b4801d7a8aa7e503cb938d588d522933c79" in 2.913s (2.913s including waiting). Image size: 439442953 bytes. |
| openshift-network-console | kubelet | networking-console-plugin-85df6bdd68-f5bnc | Created | Created container: networking-console-plugin |
| openshift-network-console | kubelet | networking-console-plugin-85df6bdd68-f5bnc | Started | Started container networking-console-plugin |
| openshift-authentication | kubelet | oauth-openshift-6ddc4f49f9-thnnf | Started | Started container oauth-openshift |
| openshift-authentication | kubelet | oauth-openshift-6ddc4f49f9-thnnf | Created | Created container: oauth-openshift |
| openshift-authentication | kubelet | oauth-openshift-6ddc4f49f9-thnnf | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ed5799bca3f9d73de168cbc978de18a72d2cb6bd22149c4fc813fb7b9a971f5a" in 3.07s (3.07s including waiting). Image size: 474495494 bytes. |
| openshift-multus | kubelet | multus-admission-controller-6bc7c56dc6-4dpkm | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:20340db1108fda428a7abee6193330945c70ad69148f122a7f32a889047c8003" already present on machine |
| openshift-image-registry | kubelet | node-ca-8fg56 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:033c253ddc49271d2affc9841208ba0a36a902d5cf00eae4873bae24715622d2" in 3.521s (3.521s including waiting). Image size: 483543768 bytes. |
| openshift-authentication | kubelet | oauth-openshift-55df5b4c9d-k6sz4 | Started | Started container oauth-openshift |
| openshift-console | replicaset-controller | downloads-65bb9777fc | SuccessfulCreate | Created pod: downloads-65bb9777fc-bm4pw |
| openshift-multus | kubelet | multus-admission-controller-7b6b7bb859-m8s2b | Killing | Stopping container kube-rbac-proxy |
| openshift-authentication | replicaset-controller | oauth-openshift-55df5b4c9d | SuccessfulCreate | Created pod: oauth-openshift-55df5b4c9d-wpbsb |
(x2) | openshift-console | controllermanager | console | NoPods | No matching pods found |
| openshift-multus | kubelet | multus-admission-controller-6bc7c56dc6-4dpkm | Created | Created container: multus-admission-controller |
| openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.ocp.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.182.42:443/healthz\": dial tcp 172.30.182.42:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "IngressStateEndpointsDegraded: All 1 endpoints for oauth-server are reporting 'not ready'\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.ocp.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.182.42:443/healthz\": dial tcp 172.30.182.42:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" |
| openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: All 1 endpoints for oauth-server are reporting 'not ready'\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.ocp.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.182.42:443/healthz\": dial tcp 172.30.182.42:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "OAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.ocp.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.182.42:443/healthz\": dial tcp 172.30.182.42:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" |
| openshift-multus | kubelet | multus-admission-controller-6bc7c56dc6-4dpkm | Started | Started container multus-admission-controller |
| openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.ocp.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.182.42:443/healthz\": dial tcp 172.30.182.42:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.34.12:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.ocp.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.182.42:443/healthz\": dial tcp 172.30.182.42:443: connect: connection refused\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.34.12:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" |
| openshift-multus | kubelet | multus-admission-controller-6bc7c56dc6-4dpkm | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine |
| openshift-multus | kubelet | multus-admission-controller-6bc7c56dc6-4dpkm | Created | Created container: kube-rbac-proxy |
| openshift-authentication | replicaset-controller | oauth-openshift-6ddc4f49f9 | SuccessfulDelete | Deleted pod: oauth-openshift-6ddc4f49f9-qzlvm |
| openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.ocp.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.182.42:443/healthz\": dial tcp 172.30.182.42:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "OAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.ocp.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.182.42:443/healthz\": dial tcp 172.30.182.42:443: connect: connection refused" |
| openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.ocp.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.182.42:443/healthz\": dial tcp 172.30.182.42:443: connect: connection refused\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.34.12:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.ocp.openstack.lab/healthz\": EOF\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.34.12:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" |
| openshift-multus | kubelet | multus-admission-controller-6bc7c56dc6-4dpkm | Started | Started container kube-rbac-proxy |
| openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.ocp.openstack.lab/healthz\": EOF\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.34.12:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.34.12:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" |
| openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.ocp.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.182.42:443/healthz\": dial tcp 172.30.182.42:443: connect: connection refused" to "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.182.42:443/healthz\": dial tcp 172.30.182.42:443: connect: connection refused" |
| openshift-console-operator | console-operator-health-check-controller-healthcheckcontroller | console-operator | FastControllerResync | Controller "HealthCheckController" resync interval is set to 30s which might lead to client request throttling |
| openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded changed from True to False ("All is well") |
| openshift-authentication | deployment-controller | oauth-openshift | ScalingReplicaSet | Scaled down replica set oauth-openshift-6ddc4f49f9 to 1 from 2 |
| openshift-authentication | deployment-controller | oauth-openshift | ScalingReplicaSet | Scaled up replica set oauth-openshift-55df5b4c9d to 2 from 1 |
| openshift-multus | deployment-controller | multus-admission-controller | ScalingReplicaSet | Scaled down replica set multus-admission-controller-7b6b7bb859 to 0 from 1 |
| openshift-multus | replicaset-controller | multus-admission-controller-7b6b7bb859 | SuccessfulDelete | Deleted pod: multus-admission-controller-7b6b7bb859-m8s2b |
openshift-console-operator |
console-operator |
console-operator |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", 
"TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} | |
openshift-multus |
kubelet |
multus-admission-controller-7b6b7bb859-m8s2b |
Killing |
Stopping container multus-admission-controller | |
(x2) | openshift-console-operator |
console-operator-status-controller-statussyncer_console |
console-operator |
OperatorVersionChanged |
clusteroperator/console version "operator" changed from "" to "4.18.25" |
openshift-console-operator |
console-operator-console-pdb-controller-poddisruptionbudgetcontroller |
console-operator |
PodDisruptionBudgetCreated |
Created PodDisruptionBudget.policy/console -n openshift-console because it was missing | |
openshift-console-operator |
console-operator-console-downloads-deployment-controller-consoledownloadsdeploymentsynccontroller |
console-operator |
DeploymentCreated |
Created Deployment.apps/downloads -n openshift-console because it was missing | |
openshift-console-operator |
console-operator-status-controller-statussyncer_console |
console-operator |
OperatorStatusChanged |
Status for clusteroperator/console changed: Degraded set to False ("All is well"),Progressing set to False ("All is well"),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"console.openshift.io" "consoleplugins" "" "monitoring-plugin"} {"console.openshift.io" "consoleplugins" "" "networking-console-plugin"} {"operator.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "infrastructures" "" "cluster"} {"config.openshift.io" "proxies" "" "cluster"} {"config.openshift.io" "oauths" "" "cluster"} {"oauth.openshift.io" "oauthclients" "" "console"} {"" "namespaces" "" "openshift-console-operator"} {"" "namespaces" "" "openshift-console"} {"" "configmaps" "openshift-config-managed" "console-public"}],status.versions changed from [] to [{"operator" "4.18.25"}] | |
openshift-console-operator |
console-operator-downloads-pdb-controller-poddisruptionbudgetcontroller |
console-operator |
PodDisruptionBudgetCreated |
Created PodDisruptionBudget.policy/downloads -n openshift-console because it was missing | |
openshift-console-operator |
console-operator-status-controller-statussyncer_console |
console-operator |
OperatorStatusChanged |
Status for clusteroperator/console changed: Upgradeable changed from Unknown to True ("All is well") | |
openshift-console |
replicaset-controller |
downloads-65bb9777fc |
SuccessfulCreate |
Created pod: downloads-65bb9777fc-sd822 | |
(x2) | openshift-console |
controllermanager |
downloads |
NoPods |
No matching pods found |
openshift-console |
deployment-controller |
downloads |
ScalingReplicaSet |
Scaled up replica set downloads-65bb9777fc to 2 | |
openshift-console-operator |
console-operator |
console-operator-lock |
LeaderElection |
console-operator-6768b5f5f9-6l8p6_a2c4d4bc-3914-4722-9989-95ef0c28090f became leader | |
openshift-console-operator |
console-operator-resource-sync-controller-resourcesynccontroller |
console-operator |
ConfigMapCreated |
Created ConfigMap/oauth-serving-cert -n openshift-console because it was missing | |
openshift-console-operator |
console-operator-console-service-controller-consoleservicecontroller |
console-operator |
ServiceCreated |
Created Service/console -n openshift-console because it was missing | |
openshift-console-operator |
console-operator-resource-sync-controller-resourcesynccontroller |
console-operator |
ConfigMapCreated |
Created ConfigMap/default-ingress-cert -n openshift-console because it was missing | |
openshift-console-operator |
console-operator-console-service-controller-consoleservicecontroller |
console-operator |
ServiceCreated |
Created Service/downloads -n openshift-console because it was missing | |
openshift-console |
kubelet |
downloads-65bb9777fc-sd822 |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76058284378b0037d8c37e800ff8d9c8bec379904010e912e2e2b6414bc6bb7f" | |
openshift-console |
multus |
downloads-65bb9777fc-sd822 |
AddedInterface |
Add eth0 [10.129.0.67/23] from ovn-kubernetes | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 1 node is at revision 5; 1 node is at revision 6" to "NodeInstallerProgressing: 1 node is at revision 0; 2 nodes are at revision 6",Available message changed from "StaticPodsAvailable: 2 nodes are active; 1 node is at revision 0; 1 node is at revision 5; 1 node is at revision 6" to "StaticPodsAvailable: 2 nodes are active; 1 node is at revision 0; 2 nodes are at revision 6" | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-installer-controller |
openshift-kube-scheduler-operator |
NodeCurrentRevisionChanged |
Updated node "master-2" from revision 5 to 6 because static pod is ready | |
openshift-console |
kubelet |
downloads-65bb9777fc-bm4pw |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76058284378b0037d8c37e800ff8d9c8bec379904010e912e2e2b6414bc6bb7f" | |
openshift-console |
multus |
downloads-65bb9777fc-bm4pw |
AddedInterface |
Add eth0 [10.128.0.85/23] from ovn-kubernetes | |
openshift-apiserver |
multus |
apiserver-5f68d4c887-s2fvb |
AddedInterface |
Add eth0 [10.129.0.68/23] from ovn-kubernetes | |
openshift-monitoring |
kubelet |
monitoring-plugin-75bcf9f5fd-5f2qh |
Created |
Created container: monitoring-plugin | |
openshift-apiserver |
kubelet |
apiserver-5f68d4c887-s2fvb |
Started |
Started container fix-audit-permissions | |
openshift-monitoring |
kubelet |
monitoring-plugin-75bcf9f5fd-5f2qh |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:84adcf9faa58ecd3baf5d7406e6ccc4f83a83c1b6d67dc4e188311d780221650" in 5.768s (5.768s including waiting). Image size: 440842752 bytes. | |
openshift-apiserver |
kubelet |
apiserver-5f68d4c887-s2fvb |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:db6d4edac103c373eb6bee221074d39e3707377b4d26444e98afb1a1363b3cb7" already present on machine | |
openshift-monitoring |
kubelet |
monitoring-plugin-75bcf9f5fd-5f2qh |
Started |
Started container monitoring-plugin | |
openshift-console-operator |
console-operator-status-controller-statussyncer_console |
console-operator |
OperatorStatusChanged |
Status for clusteroperator/console changed: Degraded message changed from "All is well" to "OAuthClientsControllerDegraded: secret \"console-oauth-config\" not found" | |
openshift-console-operator |
console-operator-console-operator-consoleoperator |
console-operator |
ConfigMapCreated |
Created ConfigMap/console-config -n openshift-console because it was missing | |
openshift-apiserver |
kubelet |
apiserver-5f68d4c887-s2fvb |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:db6d4edac103c373eb6bee221074d39e3707377b4d26444e98afb1a1363b3cb7" already present on machine | |
openshift-console-operator |
console-operator-oauthclient-secret-controller-oauthclientsecretcontroller |
console-operator |
SecretCreated |
Created Secret/console-oauth-config -n openshift-console because it was missing | |
openshift-apiserver |
kubelet |
apiserver-5f68d4c887-s2fvb |
Created |
Created container: fix-audit-permissions | |
openshift-console |
deployment-controller |
console |
ScalingReplicaSet |
Scaled up replica set console-668956f9dd to 2 | |
openshift-console-operator |
console-operator-console-operator-consoleoperator |
console-operator |
ConfigMapCreated |
Created ConfigMap/console-public -n openshift-config-managed because it was missing | |
openshift-apiserver |
kubelet |
apiserver-5f68d4c887-s2fvb |
Started |
Started container openshift-apiserver | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-installer-controller |
openshift-kube-scheduler-operator |
NodeTargetRevisionChanged |
Updating node "master-0" from revision 0 to 6 because node master-0 static pod not found | |
openshift-apiserver |
kubelet |
apiserver-5f68d4c887-s2fvb |
Created |
Created container: openshift-apiserver | |
openshift-apiserver |
kubelet |
apiserver-5f68d4c887-s2fvb |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine | |
openshift-apiserver |
kubelet |
apiserver-5f68d4c887-s2fvb |
Created |
Created container: openshift-apiserver-check-endpoints | |
openshift-console-operator |
console-operator-console-operator-consoleoperator |
console-operator |
DeploymentCreated |
Created Deployment.apps/console -n openshift-console because it was missing | |
openshift-apiserver |
kubelet |
apiserver-5f68d4c887-s2fvb |
Started |
Started container openshift-apiserver-check-endpoints | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
PodDisruptionBudgetCreated |
Created PodDisruptionBudget.policy/monitoring-plugin -n openshift-monitoring because it was missing | |
openshift-authentication-operator |
cluster-authentication-operator-config-observer-configobserver |
authentication-operator |
ObserveConsoleURL |
assetPublicURL changed from to https://console-openshift-console.apps.ocp.openstack.lab | |
openshift-authentication-operator |
cluster-authentication-operator-config-observer-configobserver |
authentication-operator |
ObservedConfigChanged |
Writing updated section ("oauthServer") of observed config: "\u00a0\u00a0map[string]any{\n\u00a0\u00a0\t\"corsAllowedOrigins\": []any{string(`//127\\.0\\.0\\.1(:|$)`), string(\"//localhost(:|$)\")},\n\u00a0\u00a0\t\"oauthConfig\": map[string]any{\n-\u00a0\t\t\"assetPublicURL\": string(\"\"),\n+\u00a0\t\t\"assetPublicURL\": string(\"https://console-openshift-console.apps.ocp.openstack.lab\"),\n\u00a0\u00a0\t\t\"loginURL\": string(\"https://api.ocp.openstack.lab:6443\"),\n\u00a0\u00a0\t\t\"templates\": map[string]any{\"error\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...), \"login\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...), \"providerSelection\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...)},\n\u00a0\u00a0\t\t\"tokenConfig\": map[string]any{\"accessTokenMaxAgeSeconds\": float64(86400), \"authorizeTokenMaxAgeSeconds\": float64(300)},\n\u00a0\u00a0\t},\n\u00a0\u00a0\t\"serverArguments\": map[string]any{\"audit-log-format\": []any{string(\"json\")}, \"audit-log-maxbackup\": []any{string(\"10\")}, \"audit-log-maxsize\": []any{string(\"100\")}, \"audit-log-path\": []any{string(\"/var/log/oauth-server/audit.log\")}, ...},\n\u00a0\u00a0\t\"servingInfo\": map[string]any{\"cipherSuites\": []any{string(\"TLS_AES_128_GCM_SHA256\"), string(\"TLS_AES_256_GCM_SHA384\"), string(\"TLS_CHACHA20_POLY1305_SHA256\"), string(\"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256\"), ...}, \"minTLSVersion\": string(\"VersionTLS12\"), \"namedCertificates\": []any{map[string]any{\"certFile\": string(\"/var/config/system/secrets/v4-0-config-system-router-certs/apps.\"...), \"keyFile\": string(\"/var/config/system/secrets/v4-0-config-system-router-certs/apps.\"...), \"names\": []any{string(\"*.apps.ocp.openstack.lab\")}}}},\n\u00a0\u00a0\t\"volumesToMount\": map[string]any{\"identityProviders\": string(\"{}\")},\n\u00a0\u00a0}\n" | |
openshift-console |
replicaset-controller |
console-668956f9dd |
SuccessfulCreate |
Created pod: console-668956f9dd-mlrd8 | |
openshift-console |
replicaset-controller |
console-668956f9dd |
SuccessfulCreate |
Created pod: console-668956f9dd-llkhv | |
openshift-console |
multus |
console-668956f9dd-mlrd8 |
AddedInterface |
Add eth0 [10.128.0.86/23] from ovn-kubernetes | |
openshift-console |
multus |
console-668956f9dd-llkhv |
AddedInterface |
Add eth0 [10.129.0.69/23] from ovn-kubernetes | |
openshift-console |
kubelet |
console-668956f9dd-mlrd8 |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8f562f655905d420982e90e7817214586a6e8103e7ef15d1dd57f2ae5ee16edb" | |
openshift-kube-apiserver |
apiserver |
kube-apiserver-master-1 |
InFlightRequestsDrained |
All non long-running request(s) in-flight have drained | |
openshift-console |
kubelet |
console-668956f9dd-llkhv |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8f562f655905d420982e90e7817214586a6e8103e7ef15d1dd57f2ae5ee16edb" | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-prunecontroller |
openshift-kube-scheduler-operator |
PodCreated |
Created Pod/revision-pruner-6-master-0 -n openshift-kube-scheduler because it was missing | |
openshift-kube-apiserver |
apiserver |
kube-apiserver-master-1 |
HTTPServerStoppedListening |
HTTP Server has stopped listening | |
openshift-kube-apiserver |
apiserver |
kube-apiserver-master-1 |
AfterShutdownDelayDuration |
The minimal shutdown duration of 1m10s finished | |
openshift-console-operator |
console-operator-console-downloads-deployment-controller-consoledownloadsdeploymentsynccontroller |
console-operator |
DeploymentUpdated |
Updated Deployment.apps/downloads -n openshift-console because it changed | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 2/3 pods have been updated to the latest generation and 1/3 pods are available\nOAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/3 pods have been updated to the latest generation and 0/3 pods are available\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.34.12:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 2/3 pods have been updated to the latest generation and 2/3 pods are available\nOAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/3 pods have been updated to the latest generation and 0/3 pods are available\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.34.12:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-installer-controller |
openshift-kube-scheduler-operator |
PodCreated |
Created Pod/installer-6-master-0 -n openshift-kube-scheduler because it was missing | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 2/3 pods have been updated to the latest generation and 2/3 pods are available\nOAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/3 pods have been updated to the latest generation and 0/3 pods are available\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.34.12:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 2/3 pods have been updated to the latest generation and 2/3 pods are available\nOAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 2/3 pods have been updated to the latest generation and 2/3 pods are available\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.34.12:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)",Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.34.12:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "WellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint 
https://192.168.34.12:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" | |
openshift-kube-apiserver |
apiserver |
kube-apiserver-master-1 |
TerminationGracefulTerminationFinished |
All pending requests processed | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ConfigMapCreated |
Created ConfigMap/alertmanager-trusted-ca-bundle -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ServiceAccountCreated |
Created ServiceAccount/alertmanager-main -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ServiceAccountCreated |
Created ServiceAccount/thanos-querier -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/thanos-querier because it was missing | |
openshift-console-operator |
console-operator-status-controller-statussyncer_console |
console-operator |
OperatorStatusChanged |
Status for clusteroperator/console changed: Progressing changed from False to True ("SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected"),Available changed from Unknown to False ("DeploymentAvailable: 0 replicas available for console deployment") | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ServiceCreated |
Created Service/alertmanager-main -n openshift-monitoring because it was missing | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 2/3 pods have been updated to the latest generation and 1/3 pods are available" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 2/3 pods have been updated to the latest generation and 2/3 pods are available" | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
SecretCreated |
Created Secret/thanos-querier-kube-rbac-proxy-web -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/thanos-querier because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/alertmanager-main because it was missing | |
(x2) | openshift-monitoring |
controllermanager |
alertmanager-main |
NoPods |
No matching pods found |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/alertmanager-main because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
PodDisruptionBudgetCreated |
Created PodDisruptionBudget.policy/alertmanager-main -n openshift-monitoring because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-prunecontroller |
openshift-kube-scheduler-operator |
PodCreated |
Created Pod/revision-pruner-6-master-1 -n openshift-kube-scheduler because it was missing | |
openshift-console |
kubelet |
console-668956f9dd-mlrd8 |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8f562f655905d420982e90e7817214586a6e8103e7ef15d1dd57f2ae5ee16edb" in 3.96s (3.96s including waiting). Image size: 626969044 bytes. | |
openshift-monitoring |
multus |
alertmanager-main-0 |
AddedInterface |
Add eth0 [10.129.0.70/23] from ovn-kubernetes | |
openshift-kube-scheduler |
kubelet |
revision-pruner-6-master-1 |
Started |
Started container pruner | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
SecretCreated |
Created Secret/kube-rbac-proxy -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
SecretCreated |
Created Secret/prometheus-k8s-kube-rbac-proxy-web -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ServiceAccountCreated |
Created ServiceAccount/prometheus-k8s -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/alertmanager-prometheusk8s -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/prometheus-k8s-config -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/prometheus-k8s -n default because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
SecretCreated |
Created Secret/thanos-querier-grpc-tls-8otna1nr4bh0o -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/prometheus-k8s -n kube-system because it was missing | |
openshift-console |
kubelet |
console-668956f9dd-mlrd8 |
Created |
Created container: console | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d26267190f13ef59cf0f8f5eee729694c7faccc36ab1294566192272625a58af" already present on machine | |
openshift-console |
replicaset-controller |
console-554dc689f9 |
SuccessfulCreate |
Created pod: console-554dc689f9-rnmmd | |
openshift-console |
kubelet |
console-668956f9dd-mlrd8 |
Started |
Started container console | |
openshift-console |
kubelet |
console-668956f9dd-llkhv |
Created |
Created container: console | |
openshift-console |
replicaset-controller |
console-668956f9dd |
SuccessfulDelete |
Deleted pod: console-668956f9dd-llkhv | |
openshift-console |
kubelet |
console-668956f9dd-llkhv |
Started |
Started container console | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Created |
Created container: init-config-reloader | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Started |
Started container init-config-reloader | |
openshift-monitoring |
replicaset-controller |
thanos-querier-cc99494f6 |
SuccessfulCreate |
Created pod: thanos-querier-cc99494f6-kmmxc | |
openshift-monitoring |
replicaset-controller |
thanos-querier-cc99494f6 |
SuccessfulCreate |
Created pod: thanos-querier-cc99494f6-ds5gd | |
openshift-kube-scheduler |
kubelet |
revision-pruner-6-master-1 |
Created |
Created container: pruner | |
openshift-kube-scheduler |
kubelet |
revision-pruner-6-master-1 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f81582ec6e6cc159d578a2d70ce7c8a4db8eb0172334226c9123770d7d2a1642" already present on machine | |
openshift-kube-scheduler |
multus |
revision-pruner-6-master-1 |
AddedInterface |
Add eth0 [10.128.0.87/23] from ovn-kubernetes | |
openshift-console |
replicaset-controller |
console-554dc689f9 |
SuccessfulCreate |
Created pod: console-554dc689f9-c5k9h | |
openshift-monitoring |
multus |
alertmanager-main-1 |
AddedInterface |
Add eth0 [10.128.0.88/23] from ovn-kubernetes | |
openshift-monitoring |
kubelet |
alertmanager-main-1 |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d26267190f13ef59cf0f8f5eee729694c7faccc36ab1294566192272625a58af" | |
openshift-console |
deployment-controller |
console |
ScalingReplicaSet |
Scaled up replica set console-554dc689f9 to 2 | |
openshift-console |
kubelet |
console-668956f9dd-llkhv |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8f562f655905d420982e90e7817214586a6e8103e7ef15d1dd57f2ae5ee16edb" in 4.196s (4.196s including waiting). Image size: 626969044 bytes. | |
openshift-console |
deployment-controller |
console |
ScalingReplicaSet |
Scaled down replica set console-668956f9dd to 1 from 2 | |
openshift-monitoring |
deployment-controller |
thanos-querier |
ScalingReplicaSet |
Scaled up replica set thanos-querier-cc99494f6 to 2 | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ServiceCreated |
Created Service/prometheus-k8s-thanos-sidecar -n openshift-monitoring because it was missing | |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceCreated | Created Service/prometheus-k8s -n openshift-monitoring because it was missing |
| | openshift-monitoring | multus | thanos-querier-cc99494f6-ds5gd | AddedInterface | Add eth0 [10.128.0.89/23] from ovn-kubernetes |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:755d2dc7bc83f2e1c10e6a0a70dd9acdd6bc282ad4ae973794d262a785e9f6d6" |
| (x2) | openshift-controller-manager-operator | openshift-controller-manager-operator-config-observer-configobserver | openshift-controller-manager-operator | ObservedConfigChanged | Writing updated observed config: map[string]any{ "build": map[string]any{"buildDefaults": map[string]any{"resources": map[string]any{}}, "imageTemplateFormat": map[string]any{"format": string("quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a435ee2ec"...)}}, "controllers": []any{ ... // 8 identical elements string("openshift.io/deploymentconfig"), string("openshift.io/image-import"), strings.Join({ + "-", "openshift.io/image-puller-rolebindings", }, ""), string("openshift.io/image-signature-import"), string("openshift.io/image-trigger"), ... // 2 identical elements string("openshift.io/origin-namespace"), string("openshift.io/serviceaccount"), strings.Join({ + "-", "openshift.io/serviceaccount-pull-secrets", }, ""), string("openshift.io/templateinstance"), string("openshift.io/templateinstancefinalizer"), string("openshift.io/unidling"), }, "deployer": map[string]any{"imageTemplateFormat": map[string]any{"format": string("quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ac368a7ef"...)}}, "featureGates": []any{string("BuildCSIVolumes=true")}, "ingress": map[string]any{"ingressIPNetworkCIDR": string("")}, } |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s-config -n openshift-monitoring because it was missing |
| | openshift-console | kubelet | console-668956f9dd-llkhv | Killing | Stopping container console |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n openshift-user-workload-monitoring because it was missing |
| | openshift-monitoring | kubelet | thanos-querier-cc99494f6-kmmxc | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d24c4db9f6f0e9fb8ffdf9dd2b08101c37316b989e6709d13783e7d6d3baef73" |
| | openshift-monitoring | multus | thanos-querier-cc99494f6-kmmxc | AddedInterface | Add eth0 [10.129.0.71/23] from ovn-kubernetes |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n kube-system because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n default because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/prometheus-k8s -n openshift-user-workload-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/prometheus-k8s -n openshift-monitoring because it was missing |
| | openshift-monitoring | kubelet | thanos-querier-cc99494f6-ds5gd | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d24c4db9f6f0e9fb8ffdf9dd2b08101c37316b989e6709d13783e7d6d3baef73" |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Created | Created container: init-config-reloader |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:755d2dc7bc83f2e1c10e6a0a70dd9acdd6bc282ad4ae973794d262a785e9f6d6" in 1.986s (1.986s including waiting). Image size: 460575314 bytes. |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d26267190f13ef59cf0f8f5eee729694c7faccc36ab1294566192272625a58af" in 1.514s (1.514s including waiting). Image size: 430951015 bytes. |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-prunecontroller | openshift-kube-scheduler-operator | PodCreated | Created Pod/revision-pruner-6-master-2 -n openshift-kube-scheduler because it was missing |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Started | Started container init-config-reloader |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:755d2dc7bc83f2e1c10e6a0a70dd9acdd6bc282ad4ae973794d262a785e9f6d6" |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d26267190f13ef59cf0f8f5eee729694c7faccc36ab1294566192272625a58af" already present on machine |
| | openshift-kube-scheduler | multus | revision-pruner-6-master-2 | AddedInterface | Add eth0 [10.129.0.72/23] from ovn-kubernetes |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container config-reloader |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: config-reloader |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: alertmanager |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container alertmanager |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine |
| | openshift-network-node-identity | kubelet | network-node-identity-lfw7t | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b05c14f2032f7ba3017e9bcb6b3be4e7eaed8223e30a721b46b24f9cdcbd6a95" already present on machine |
| | openshift-network-node-identity | kubelet | network-node-identity-lfw7t | Created | Created container: webhook |
| | openshift-cluster-node-tuning-operator | kubelet | tuned-vvmcs | Started | Started container tuned |
| | openshift-multus | kubelet | multus-additional-cni-plugins-dzmh2 | Started | Started container egress-router-binary-copy |
| | openshift-network-node-identity | kubelet | network-node-identity-lfw7t | Started | Started container webhook |
| | openshift-multus | kubelet | multus-additional-cni-plugins-dzmh2 | Created | Created container: egress-router-binary-copy |
| | openshift-multus | kubelet | multus-additional-cni-plugins-dzmh2 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbde693d384ae08cdaf9126a9a6359bb5515793f63108ef216cbddf1c995af3e" in 16.653s (16.653s including waiting). Image size: 530836538 bytes. |
| | openshift-monitoring | kubelet | node-exporter-gww4z | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66366501aac86a6d898d235d0b96dbe7679a2e142e8c615524f0bdc3ddd68b21" in 15.644s (15.644s including waiting). Image size: 410753681 bytes. |
| | openshift-monitoring | kubelet | node-exporter-gww4z | Created | Created container: init-textfile |
| | openshift-monitoring | kubelet | node-exporter-gww4z | Started | Started container init-textfile |
| | openshift-multus | kubelet | multus-bvr92 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd0854905c4929cfbb163b57dd290d4a74e65d11c01d86b5e1e177a0c246106e" in 16.703s (16.703s including waiting). Image size: 1230574268 bytes. |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: kube-rbac-proxy-web |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container kube-rbac-proxy-web |
| | openshift-kube-scheduler | kubelet | revision-pruner-6-master-2 | Created | Created container: pruner |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine |
| | openshift-kube-scheduler | kubelet | revision-pruner-6-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f81582ec6e6cc159d578a2d70ce7c8a4db8eb0172334226c9123770d7d2a1642" already present on machine |
| | openshift-network-node-identity | kubelet | network-node-identity-lfw7t | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b05c14f2032f7ba3017e9bcb6b3be4e7eaed8223e30a721b46b24f9cdcbd6a95" in 16.769s (16.769s including waiting). Image size: 1565215279 bytes. |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-xsrn9 | Created | Created container: kubecfg-setup |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-xsrn9 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b05c14f2032f7ba3017e9bcb6b3be4e7eaed8223e30a721b46b24f9cdcbd6a95" in 16.861s (16.861s including waiting). Image size: 1565215279 bytes. |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-xsrn9 | Started | Started container kubecfg-setup |
| | openshift-cluster-node-tuning-operator | kubelet | tuned-vvmcs | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ca9272c8bbbde3ffdea2887c91dfb5ec4b09de7a8e2ae03aa5a47f56ff41e326" in 16.721s (16.721s including waiting). Image size: 681716323 bytes. |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: kube-rbac-proxy |
| | openshift-image-registry | kubelet | node-ca-4wx2z | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:033c253ddc49271d2affc9841208ba0a36a902d5cf00eae4873bae24715622d2" in 16.542s (16.542s including waiting). Image size: 483543768 bytes. |
| | openshift-cluster-node-tuning-operator | kubelet | tuned-vvmcs | Created | Created container: tuned |
| | openshift-dns | kubelet | node-resolver-544bd | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c1bf279b80440264700aa5e7b186b74a9ca45bd6a14638beb3ee5df0e610086a" in 16.734s (16.734s including waiting). Image size: 575181628 bytes. |
| | openshift-dns | kubelet | node-resolver-544bd | Created | Created container: dns-node-resolver |
| | openshift-image-registry | kubelet | node-ca-4wx2z | Created | Created container: node-ca |
| | openshift-image-registry | kubelet | node-ca-4wx2z | Started | Started container node-ca |
| | openshift-network-node-identity | kubelet | network-node-identity-lfw7t | Created | Created container: approver |
| | openshift-multus | kubelet | multus-bvr92 | Started | Started container kube-multus |
| | openshift-network-node-identity | kubelet | network-node-identity-lfw7t | Started | Started container approver |
| | openshift-kube-scheduler | kubelet | revision-pruner-6-master-2 | Started | Started container pruner |
| | openshift-dns | kubelet | node-resolver-544bd | Started | Started container dns-node-resolver |
| | openshift-multus | kubelet | multus-bvr92 | Created | Created container: kube-multus |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d26267190f13ef59cf0f8f5eee729694c7faccc36ab1294566192272625a58af" already present on machine |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine |
| | openshift-multus | kubelet | multus-additional-cni-plugins-dzmh2 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6128c3fda0a374e4e705551260ee45b426a747e9d3e450d4ca1a3714fd404207" |
| | openshift-monitoring | kubelet | node-exporter-gww4z | Started | Started container node-exporter |
| | openshift-monitoring | kubelet | node-exporter-gww4z | Created | Created container: node-exporter |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded message changed from "OAuthClientsControllerDegraded: secret \"console-oauth-config\" not found" to "OAuthClientsControllerDegraded: Operation cannot be fulfilled on consoles.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61f170d009db78c5df2e61a5de6cbd57283366bb46168eea3b0cca5f005bbf59" |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container kube-rbac-proxy-metric |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: kube-rbac-proxy-metric |
| | openshift-monitoring | kubelet | node-exporter-gww4z | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66366501aac86a6d898d235d0b96dbe7679a2e142e8c615524f0bdc3ddd68b21" already present on machine |
| | openshift-monitoring | kubelet | node-exporter-gww4z | Created | Created container: kube-rbac-proxy |
| | openshift-monitoring | kubelet | node-exporter-gww4z | Started | Started container kube-rbac-proxy |
| (x2) | openshift-monitoring | controllermanager | prometheus-k8s | NoPods | No matching pods found |
| | openshift-monitoring | kubelet | thanos-querier-cc99494f6-ds5gd | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d24c4db9f6f0e9fb8ffdf9dd2b08101c37316b989e6709d13783e7d6d3baef73" in 3.901s (3.901s including waiting). Image size: 495748313 bytes. |
| | openshift-monitoring | kubelet | thanos-querier-cc99494f6-ds5gd | Created | Created container: thanos-query |
| | openshift-monitoring | kubelet | thanos-querier-cc99494f6-ds5gd | Started | Started container thanos-query |
| | openshift-monitoring | kubelet | thanos-querier-cc99494f6-ds5gd | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | SecretCreated | Created Secret/prometheus-k8s-additional-alertmanager-configs -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ConfigMapCreated | Created ConfigMap/prometheus-trusted-ca-bundle -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | PodDisruptionBudgetCreated | Created PodDisruptionBudget.policy/prometheus-k8s -n openshift-monitoring because it was missing |
| | openshift-monitoring | kubelet | thanos-querier-cc99494f6-ds5gd | Created | Created container: kube-rbac-proxy-web |
| | openshift-monitoring | kubelet | thanos-querier-cc99494f6-kmmxc | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine |
| | openshift-monitoring | kubelet | thanos-querier-cc99494f6-kmmxc | Started | Started container thanos-query |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-xsrn9 | Started | Started container ovn-controller |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container kube-rbac-proxy |
| | openshift-monitoring | kubelet | thanos-querier-cc99494f6-ds5gd | Started | Started container kube-rbac-proxy-web |
| | openshift-monitoring | kubelet | thanos-querier-cc99494f6-kmmxc | Created | Created container: thanos-query |
| | openshift-monitoring | kubelet | thanos-querier-cc99494f6-kmmxc | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d24c4db9f6f0e9fb8ffdf9dd2b08101c37316b989e6709d13783e7d6d3baef73" in 4.114s (4.114s including waiting). Image size: 495748313 bytes. |
| | openshift-monitoring | kubelet | thanos-querier-cc99494f6-ds5gd | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine |
| | openshift-authentication-operator | cluster-authentication-operator-payload-config-controller-openshift-authentication-payloadconfig | authentication-operator | ConfigMapUpdated | Updated ConfigMap/v4-0-config-system-cliconfig -n openshift-authentication: cause by changes in data.v4-0-config-system-cliconfig |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:755d2dc7bc83f2e1c10e6a0a70dd9acdd6bc282ad4ae973794d262a785e9f6d6" in 3.001s (3.001s including waiting). Image size: 460575314 bytes. |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Created | Created container: alertmanager |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | SecretCreated | Created Secret/prometheus-k8s-grpc-tls-8klgi7r2728qp -n openshift-monitoring because it was missing |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Started | Started container alertmanager |
| | openshift-monitoring | kubelet | node-exporter-gww4z | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Created | Created container: config-reloader |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Started | Started container config-reloader |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-xsrn9 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b05c14f2032f7ba3017e9bcb6b3be4e7eaed8223e30a721b46b24f9cdcbd6a95" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-xsrn9 | Created | Created container: ovn-controller |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-xsrn9 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-xsrn9 | Started | Started container kube-rbac-proxy-node |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-xsrn9 | Created | Created container: kube-rbac-proxy-node |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-xsrn9 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-xsrn9 | Started | Started container ovn-acl-logging |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-xsrn9 | Created | Created container: ovn-acl-logging |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-xsrn9 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b05c14f2032f7ba3017e9bcb6b3be4e7eaed8223e30a721b46b24f9cdcbd6a95" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-xsrn9 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b05c14f2032f7ba3017e9bcb6b3be4e7eaed8223e30a721b46b24f9cdcbd6a95" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-xsrn9 | Created | Created container: kube-rbac-proxy-ovn-metrics |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-xsrn9 | Started | Started container kube-rbac-proxy-ovn-metrics |
| | openshift-monitoring | kubelet | thanos-querier-cc99494f6-kmmxc | Started | Started container kube-rbac-proxy |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-xsrn9 | Created | Created container: northd |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-xsrn9 | Started | Started container northd |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-xsrn9 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b05c14f2032f7ba3017e9bcb6b3be4e7eaed8223e30a721b46b24f9cdcbd6a95" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-xsrn9 | Created | Created container: nbdb |
| | openshift-monitoring | kubelet | thanos-querier-cc99494f6-ds5gd | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61f170d009db78c5df2e61a5de6cbd57283366bb46168eea3b0cca5f005bbf59" |
| | openshift-monitoring | kubelet | thanos-querier-cc99494f6-ds5gd | Started | Started container kube-rbac-proxy |
| | openshift-monitoring | kubelet | thanos-querier-cc99494f6-ds5gd | Created | Created container: kube-rbac-proxy |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Created | Created container: kube-rbac-proxy |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-xsrn9 | Started | Started container nbdb |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Created | Created container: kube-rbac-proxy-web |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Started | Started container kube-rbac-proxy |
| | openshift-monitoring | kubelet | thanos-querier-cc99494f6-kmmxc | Created | Created container: kube-rbac-proxy-web |
| | openshift-monitoring | kubelet | thanos-querier-cc99494f6-kmmxc | Started | Started container kube-rbac-proxy-web |
| | openshift-monitoring | kubelet | thanos-querier-cc99494f6-kmmxc | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine |
| | openshift-monitoring | kubelet | thanos-querier-cc99494f6-kmmxc | Created | Created container: kube-rbac-proxy |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61f170d009db78c5df2e61a5de6cbd57283366bb46168eea3b0cca5f005bbf59" in 1.013s (1.013s including waiting). Image size: 406142487 bytes. |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Created | Created container: kube-rbac-proxy-metric |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Started | Started container kube-rbac-proxy-metric |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61f170d009db78c5df2e61a5de6cbd57283366bb46168eea3b0cca5f005bbf59" |
| | openshift-monitoring | statefulset-controller | prometheus-k8s | SuccessfulCreate | create Pod prometheus-k8s-0 in StatefulSet prometheus-k8s successful |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Started | Started container kube-rbac-proxy-web |
| | openshift-monitoring | kubelet | thanos-querier-cc99494f6-kmmxc | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61f170d009db78c5df2e61a5de6cbd57283366bb46168eea3b0cca5f005bbf59" |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: init-config-reloader |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: prom-label-proxy |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61f170d009db78c5df2e61a5de6cbd57283366bb46168eea3b0cca5f005bbf59" in 990ms (990ms including waiting). Image size: 406142487 bytes. |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Created | Created container: prom-label-proxy |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Started | Started container prom-label-proxy |
| | openshift-monitoring | kubelet | thanos-querier-cc99494f6-kmmxc | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine |
| | openshift-monitoring | kubelet | thanos-querier-cc99494f6-kmmxc | Started | Started container prom-label-proxy |
| | openshift-monitoring | kubelet | thanos-querier-cc99494f6-kmmxc | Created | Created container: prom-label-proxy |
| | openshift-monitoring | kubelet | thanos-querier-cc99494f6-kmmxc | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61f170d009db78c5df2e61a5de6cbd57283366bb46168eea3b0cca5f005bbf59" in 809ms (809ms including waiting). Image size: 406142487 bytes. |
| | openshift-monitoring | kubelet | thanos-querier-cc99494f6-ds5gd | Started | Started container kube-rbac-proxy-metrics |
| | openshift-monitoring | kubelet | thanos-querier-cc99494f6-ds5gd | Created | Created container: kube-rbac-proxy-metrics |
| | openshift-monitoring | kubelet | thanos-querier-cc99494f6-ds5gd | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine |
| | openshift-monitoring | kubelet | thanos-querier-cc99494f6-ds5gd | Started | Started container kube-rbac-proxy-rules |
| | openshift-monitoring | kubelet | thanos-querier-cc99494f6-ds5gd | Created | Created container: kube-rbac-proxy-rules |
| | openshift-monitoring | kubelet | thanos-querier-cc99494f6-ds5gd | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine |
| (x3) | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapUpdated | Updated ConfigMap/config -n openshift-route-controller-manager: cause by changes in data.config.yaml |
| (x3) | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapUpdated | Updated ConfigMap/config -n openshift-controller-manager: cause by changes in data.config.yaml |
| (x6) | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | DeploymentUpdated | Updated Deployment.apps/route-controller-manager -n openshift-route-controller-manager because it changed |
| | openshift-monitoring | kubelet | thanos-querier-cc99494f6-ds5gd | Started | Started container prom-label-proxy |
| | openshift-monitoring | kubelet | thanos-querier-cc99494f6-ds5gd | Created | Created container: prom-label-proxy |
| | openshift-monitoring | kubelet | thanos-querier-cc99494f6-ds5gd | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61f170d009db78c5df2e61a5de6cbd57283366bb46168eea3b0cca5f005bbf59" in 1.266s (1.266s including waiting). Image size: 406142487 bytes. |
| | openshift-monitoring | kubelet | thanos-querier-cc99494f6-kmmxc | Started | Started container kube-rbac-proxy-rules |
| | openshift-monitoring | kubelet | thanos-querier-cc99494f6-kmmxc | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine |
| | openshift-monitoring | kubelet | thanos-querier-cc99494f6-kmmxc | Created | Created container: kube-rbac-proxy-rules |
| | openshift-monitoring | kubelet | thanos-querier-cc99494f6-kmmxc | Created | Created container: kube-rbac-proxy-metrics |
| | openshift-monitoring | kubelet | thanos-querier-cc99494f6-kmmxc | Started | Started container kube-rbac-proxy-metrics |
| | openshift-monitoring | kubelet | prometheus-k8s-1 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b7ea005d75360221e268ef4a671bd1a5eb15acc98b32c7c716176ad5b6cd73d" |
| (x6) | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | DeploymentUpdated | Updated Deployment.apps/controller-manager -n openshift-controller-manager because it changed |
| | openshift-monitoring | kubelet | prometheus-k8s-1 | Started | Started container init-config-reloader |
| | openshift-monitoring | kubelet | prometheus-k8s-1 | Created | Created container: init-config-reloader |
| | openshift-monitoring | multus | prometheus-k8s-0 | AddedInterface | Add eth0 [10.129.0.73/23] from ovn-kubernetes |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d26267190f13ef59cf0f8f5eee729694c7faccc36ab1294566192272625a58af" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container init-config-reloader |
| | openshift-monitoring | kubelet | prometheus-k8s-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d26267190f13ef59cf0f8f5eee729694c7faccc36ab1294566192272625a58af" already present on machine |
| | openshift-monitoring | multus | prometheus-k8s-1 | AddedInterface | Add eth0 [10.128.0.90/23] from ovn-kubernetes |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container prom-label-proxy |
| | openshift-marketplace | kubelet | community-operators-thpzb | Created | Created container: extract-utilities |
| | openshift-marketplace | kubelet | certified-operators-r4wbf | Started | Started container extract-utilities |
| | openshift-authentication | deployment-controller | oauth-openshift | ScalingReplicaSet | Scaled down replica set oauth-openshift-55df5b4c9d to 1 from 2 |
| | openshift-authentication | deployment-controller | oauth-openshift | ScalingReplicaSet | Scaled up replica set oauth-openshift-65687bc9c8 to 1 from 0 |
| | openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled up replica set route-controller-manager-7968c6c999 to 1 from 0 |
| (x5) | openshift-authentication-operator | cluster-authentication-operator-oauthserver-workloadworkloadcontroller | authentication-operator | DeploymentUpdated | Updated Deployment.apps/oauth-openshift -n openshift-authentication because it changed |
| | openshift-controller-manager | replicaset-controller | controller-manager-78c5d9fccd | SuccessfulCreate | Created pod: controller-manager-78c5d9fccd-5xwlt |
| | openshift-authentication | replicaset-controller | oauth-openshift-65687bc9c8 | SuccessfulCreate | Created pod: oauth-openshift-65687bc9c8-w9j4s |
| | openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled down replica set route-controller-manager-76f4d8cd68 to 1 from 2 |
| | openshift-route-controller-manager | replicaset-controller | route-controller-manager-7968c6c999 | SuccessfulCreate | Created pod: route-controller-manager-7968c6c999-tlxv6 |
| | openshift-route-controller-manager | replicaset-controller | route-controller-manager-76f4d8cd68 | SuccessfulDelete | Deleted pod: route-controller-manager-76f4d8cd68-bzmnd |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-xsrn9 | Started | Started container sbdb |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 2/3 pods have been updated to the latest generation and 2/3 pods are available\nOAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 2/3 pods have been updated to the latest generation and 2/3 pods are available\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.34.12:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 2/3 pods have been updated to the latest generation and 2/3 pods are available\nOAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 3, desired generation is 4.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.34.12:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-xsrn9 | Created | Created container: sbdb |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-xsrn9 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b05c14f2032f7ba3017e9bcb6b3be4e7eaed8223e30a721b46b24f9cdcbd6a95" already present on machine |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: updated replicas is 2, desired replicas is 3\nProgressing: deployment/route-controller-manager: updated replicas is 2, desired replicas is 3" to "Progressing: deployment/controller-manager: observed generation is 9, desired generation is 10.\nProgressing: deployment/controller-manager: updated replicas is 2, desired replicas is 3\nProgressing: deployment/route-controller-manager: observed generation is 7, desired generation is 8.\nProgressing: deployment/route-controller-manager: updated replicas is 2, desired replicas is 3\nProgressing: openshiftcontrollermanagers.operator.openshift.io/cluster: observed generation is 4, desired generation is 5." |
| | openshift-authentication | replicaset-controller | oauth-openshift-55df5b4c9d | SuccessfulDelete | Deleted pod: oauth-openshift-55df5b4c9d-wpbsb |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b7ea005d75360221e268ef4a671bd1a5eb15acc98b32c7c716176ad5b6cd73d" |
| | openshift-controller-manager | replicaset-controller | controller-manager-66975b7c4d | SuccessfulDelete | Deleted pod: controller-manager-66975b7c4d-kl7k6 |
| | openshift-marketplace | kubelet | community-operators-thpzb | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | community-operators-thpzb | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac3e8e21a2acf57632da1156613d3ce424cc06446f4bd47349c7919367e1ff0f" already present on machine |
| | openshift-marketplace | multus | community-operators-thpzb | AddedInterface | Add eth0 [10.129.0.75/23] from ovn-kubernetes |
openshift-controller-manager |
deployment-controller |
controller-manager |
ScalingReplicaSet |
Scaled up replica set controller-manager-78c5d9fccd to 1 from 0 | |
openshift-controller-manager |
deployment-controller |
controller-manager |
ScalingReplicaSet |
Scaled down replica set controller-manager-66975b7c4d to 1 from 2 | |
openshift-marketplace |
multus |
certified-operators-r4wbf |
AddedInterface |
Add eth0 [10.129.0.74/23] from ovn-kubernetes | |
openshift-marketplace |
kubelet |
certified-operators-r4wbf |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac3e8e21a2acf57632da1156613d3ce424cc06446f4bd47349c7919367e1ff0f" already present on machine | |
openshift-marketplace |
kubelet |
certified-operators-r4wbf |
Created |
Created container: extract-utilities | |
openshift-marketplace |
kubelet |
community-operators-thpzb |
Created |
Created container: extract-content | |
openshift-marketplace |
kubelet |
community-operators-thpzb |
Pulling |
Pulling image "registry.redhat.io/redhat/community-operator-index:v4.18" | |
openshift-marketplace |
kubelet |
certified-operators-r4wbf |
Pulled |
Successfully pulled image "registry.redhat.io/redhat/certified-operator-index:v4.18" in 630ms (630ms including waiting). Image size: 1199160216 bytes. | |
openshift-marketplace |
kubelet |
certified-operators-r4wbf |
Created |
Created container: extract-content | |
openshift-marketplace |
kubelet |
certified-operators-r4wbf |
Started |
Started container extract-content | |
openshift-marketplace |
kubelet |
certified-operators-r4wbf |
Pulling |
Pulling image "registry.redhat.io/redhat/certified-operator-index:v4.18" | |
openshift-marketplace |
kubelet |
community-operators-thpzb |
Started |
Started container extract-content | |
openshift-marketplace |
kubelet |
community-operators-thpzb |
Pulled |
Successfully pulled image "registry.redhat.io/redhat/community-operator-index:v4.18" in 701ms (701ms including waiting). Image size: 1181047702 bytes. | |
openshift-machine-config-operator |
machineconfigdaemon |
master-0 |
ConfigDriftMonitorStarted |
Config Drift Monitor started, watching against rendered-master-1c6c5fbd3d52010f02d09a7fd9bcec0a | |
openshift-machine-config-operator |
machineconfigdaemon |
master-0 |
NodeDone |
Setting node master-0, currentConfig rendered-master-1c6c5fbd3d52010f02d09a7fd9bcec0a to Done | |
openshift-marketplace |
multus |
redhat-marketplace-7dljg |
AddedInterface |
Add eth0 [10.129.0.76/23] from ovn-kubernetes | |
openshift-machine-config-operator |
machineconfigdaemon |
master-0 |
Uncordon |
Update completed for config rendered-master-1c6c5fbd3d52010f02d09a7fd9bcec0a and node has been uncordoned | |
openshift-machine-config-operator |
machineconfigcontroller-nodecontroller |
master |
AnnotationChange |
Node master-0 now has machineconfiguration.openshift.io/reason= | |
openshift-marketplace |
kubelet |
certified-operators-r4wbf |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5" | |
| | openshift-multus | kubelet | multus-additional-cni-plugins-dzmh2 | Created | Created container: cni-plugins |
| | openshift-multus | kubelet | multus-additional-cni-plugins-dzmh2 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6128c3fda0a374e4e705551260ee45b426a747e9d3e450d4ca1a3714fd404207" in 5.517s (5.517s including waiting). Image size: 684971018 bytes. |
| | openshift-multus | kubelet | multus-additional-cni-plugins-dzmh2 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9c713df8493f490d2cd316861e6f63bc27078cda759dd9dd2817f101f233db28" |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | PodDisruptionBudgetCreated | Created PodDisruptionBudget.policy/thanos-querier-pdb -n openshift-monitoring because it was missing |
| | default | ovnkube-csr-approver-controller | csr-llhjf | CSRApproved | CSR "csr-llhjf" has been approved |
| | openshift-multus | kubelet | multus-additional-cni-plugins-dzmh2 | Started | Started container cni-plugins |
| | openshift-multus | kubelet | multus-additional-cni-plugins-dzmh2 | Started | Started container bond-cni-plugin |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | StartingNewRevision | new revision 6 triggered by "required secret/service-account-private-key has changed" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-satokensignercontroller | kube-controller-manager-operator | SecretUpdated | Updated Secret/service-account-private-key -n openshift-kube-controller-manager because it changed |
| | openshift-multus | kubelet | multus-additional-cni-plugins-dzmh2 | Created | Created container: bond-cni-plugin |
| (x6) | openshift-kube-scheduler | kubelet | revision-pruner-6-master-0 | FailedMount | MountVolume.SetUp failed for volume "kube-api-access" : object "openshift-kube-scheduler"/"kube-root-ca.crt" not registered |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded message changed from "OAuthClientsControllerDegraded: Operation cannot be fulfilled on consoles.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" to "All is well" |
| | openshift-multus | kubelet | multus-additional-cni-plugins-dzmh2 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9c713df8493f490d2cd316861e6f63bc27078cda759dd9dd2817f101f233db28" in 1.051s (1.051s including waiting). Image size: 404610285 bytes. |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/kube-controller-manager-pod-6 -n openshift-kube-controller-manager because it was missing |
| | openshift-marketplace | kubelet | community-operators-thpzb | Started | Started container registry-server |
| | openshift-marketplace | kubelet | community-operators-thpzb | Created | Created container: registry-server |
| | openshift-marketplace | kubelet | redhat-marketplace-7dljg | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac3e8e21a2acf57632da1156613d3ce424cc06446f4bd47349c7919367e1ff0f" already present on machine |
| | openshift-marketplace | kubelet | certified-operators-r4wbf | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5" in 3.194s (3.194s including waiting). Image size: 911296197 bytes. |
| | openshift-marketplace | kubelet | certified-operators-r4wbf | Created | Created container: registry-server |
| | openshift-marketplace | kubelet | certified-operators-r4wbf | Started | Started container registry-server |
| | openshift-marketplace | kubelet | redhat-marketplace-7dljg | Created | Created container: extract-utilities |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b7ea005d75360221e268ef4a671bd1a5eb15acc98b32c7c716176ad5b6cd73d" in 5.295s (5.295s including waiting). Image size: 598741346 bytes. |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded changed from False to True ("NodeControllerDegraded: The master nodes not ready: node \"master-0\" not ready since 2025-10-14 13:20:12 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)") |
| | openshift-marketplace | kubelet | community-operators-thpzb | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5" |
| | openshift-multus | kubelet | multus-additional-cni-plugins-dzmh2 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0b95ed8eaa90077acc5910504a338c0b5eea8a9b6632868366d72d48a4b6f2c4" |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: prometheus |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container prometheus |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d26267190f13ef59cf0f8f5eee729694c7faccc36ab1294566192272625a58af" already present on machine |
| | openshift-marketplace | kubelet | redhat-marketplace-7dljg | Started | Started container extract-utilities |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 2/3 pods have been updated to the latest generation and 2/3 pods are available\nOAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 3, desired generation is 4.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.34.12:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 2/3 pods have been updated to the latest generation and 2/3 pods are available\nOAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/3 pods have been updated to the latest generation and 2/3 pods are available\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.34.12:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" |
| | openshift-marketplace | kubelet | community-operators-thpzb | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5" in 430ms (430ms including waiting). Image size: 911296197 bytes. |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded changed from False to True ("NodeControllerDegraded: The master nodes not ready: node \"master-0\" not ready since 2025-10-14 13:20:12 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)") |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: config-reloader |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container config-reloader |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d24c4db9f6f0e9fb8ffdf9dd2b08101c37316b989e6709d13783e7d6d3baef73" already present on machine |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded changed from False to True ("NodeControllerDegraded: The master nodes not ready: node \"master-0\" not ready since 2025-10-14 13:20:12 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)") |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container thanos-sidecar |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: thanos-sidecar |
| | openshift-marketplace | multus | redhat-operators-hh4tw | AddedInterface | Add eth0 [10.129.0.77/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | redhat-operators-hh4tw | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac3e8e21a2acf57632da1156613d3ce424cc06446f4bd47349c7919367e1ff0f" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container kube-rbac-proxy-web |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine |
| | openshift-marketplace | kubelet | redhat-operators-hh4tw | Pulling | Pulling image "registry.redhat.io/redhat/redhat-operator-index:v4.18" |
| | openshift-marketplace | kubelet | redhat-operators-hh4tw | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-operator-index:v4.18" in 579ms (579ms including waiting). Image size: 1629241735 bytes. |
| | openshift-marketplace | kubelet | redhat-operators-hh4tw | Created | Created container: extract-content |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: The master nodes not ready: node \"master-0\" not ready since 2025-10-14 13:20:12 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)" to "GuardControllerDegraded: Missing PodIP in operand kube-apiserver-master-1 on node master-1\nNodeControllerDegraded: The master nodes not ready: node \"master-0\" not ready since 2025-10-14 13:20:12 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/config-6 -n openshift-kube-controller-manager because it was missing |
| | openshift-marketplace | kubelet | redhat-operators-hh4tw | Created | Created container: extract-utilities |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container kube-rbac-proxy-thanos |
| | openshift-marketplace | kubelet | redhat-operators-hh4tw | Started | Started container extract-utilities |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: kube-rbac-proxy-web |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: kube-rbac-proxy |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container kube-rbac-proxy |
| | openshift-multus | kubelet | multus-additional-cni-plugins-dzmh2 | Started | Started container routeoverride-cni |
| | openshift-multus | kubelet | multus-additional-cni-plugins-dzmh2 | Created | Created container: routeoverride-cni |
| | openshift-multus | kubelet | multus-additional-cni-plugins-dzmh2 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0b95ed8eaa90077acc5910504a338c0b5eea8a9b6632868366d72d48a4b6f2c4" in 840ms (840ms including waiting). Image size: 400384094 bytes. |
| (x6) | openshift-kube-scheduler | kubelet | installer-6-master-0 | FailedMount | MountVolume.SetUp failed for volume "kube-api-access" : object "openshift-kube-scheduler"/"kube-root-ca.crt" not registered |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: kube-rbac-proxy-thanos |
| | openshift-marketplace | kubelet | redhat-marketplace-7dljg | Pulling | Pulling image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" |
| | openshift-marketplace | kubelet | redhat-marketplace-7dljg | Created | Created container: extract-content |
| | openshift-marketplace | kubelet | redhat-marketplace-7dljg | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" in 730ms (730ms including waiting). Image size: 1057212814 bytes. |
| | openshift-multus | kubelet | multus-additional-cni-plugins-dzmh2 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7983420590be0b0f62b726996dd73769a35c23a4b3b283f8cf20e09418e814eb" |
| | openshift-marketplace | kubelet | redhat-operators-hh4tw | Started | Started container extract-content |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Progressing message changed from "SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected" to "SyncLoopRefreshProgressing: working toward version 4.18.25, 0 replicas available" |
| | openshift-marketplace | kubelet | redhat-marketplace-7dljg | Started | Started container registry-server |
| | openshift-marketplace | kubelet | redhat-marketplace-7dljg | Created | Created container: registry-server |
| | openshift-marketplace | kubelet | redhat-marketplace-7dljg | Started | Started container extract-content |
| | openshift-marketplace | kubelet | redhat-marketplace-7dljg | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5" |
| | openshift-marketplace | kubelet | redhat-marketplace-7dljg | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5" in 470ms (470ms including waiting). Image size: 911296197 bytes. |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/cluster-policy-controller-config-6 -n openshift-kube-controller-manager because it was missing |
| | default | ovnkube-csr-approver-controller | csr-5v5jp | CSRApproved | CSR "csr-5v5jp" has been approved |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/controller-manager-kubeconfig-6 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/kube-controller-cert-syncer-kubeconfig-6 -n openshift-kube-controller-manager because it was missing |
| (x7) | openshift-network-diagnostics | kubelet | network-check-target-vmk66 | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-kvcqq" : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] |
| (x18) | openshift-network-diagnostics | kubelet | network-check-target-vmk66 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x7) | openshift-multus | kubelet | network-metrics-daemon-p5vjv | FailedMount | MountVolume.SetUp failed for volume "metrics-certs" : object "openshift-multus"/"metrics-daemon-secret" not registered |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/serviceaccount-ca-6 -n openshift-kube-controller-manager because it was missing |
| (x18) | openshift-multus | kubelet | network-metrics-daemon-p5vjv | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-marketplace | kubelet | certified-operators-r4wbf | Killing | Stopping container registry-server |
| (x13) | openshift-kube-scheduler | kubelet | installer-6-master-0 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "GuardControllerDegraded: Missing PodIP in operand kube-apiserver-master-1 on node master-1\nNodeControllerDegraded: The master nodes not ready: node \"master-0\" not ready since 2025-10-14 13:20:12 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)" to "NodeControllerDegraded: The master nodes not ready: node \"master-0\" not ready since 2025-10-14 13:20:12 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/service-ca-6 -n openshift-kube-controller-manager because it was missing |
| (x15) | openshift-kube-scheduler | kubelet | revision-pruner-6-master-0 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-marketplace | kubelet | community-operators-thpzb | Killing | Stopping container registry-server |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/recycler-config-6 -n openshift-kube-controller-manager because it was missing |
| | openshift-console | kubelet | downloads-65bb9777fc-bm4pw | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76058284378b0037d8c37e800ff8d9c8bec379904010e912e2e2b6414bc6bb7f" in 29.637s (29.637s including waiting). Image size: 2888816073 bytes. |
| | openshift-monitoring | kubelet | prometheus-k8s-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d26267190f13ef59cf0f8f5eee729694c7faccc36ab1294566192272625a58af" already present on machine |
| | openshift-route-controller-manager | route-controller-manager | openshift-route-controllers | LeaderElection | route-controller-manager-76f4d8cd68-t98ml_b5a46a98-ce78-49cf-8ed5-67d14fd4f9a9 became leader |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded message changed from "All is well" to "RouteHealthDegraded: route not yet available, https://console-openshift-console.apps.ocp.openstack.lab returns '503 Service Unavailable'",Available message changed from "DeploymentAvailable: 0 replicas available for console deployment" to "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: route not yet available, https://console-openshift-console.apps.ocp.openstack.lab returns '503 Service Unavailable'" |
| | openshift-monitoring | kubelet | prometheus-k8s-1 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b7ea005d75360221e268ef4a671bd1a5eb15acc98b32c7c716176ad5b6cd73d" in 15.072s (15.072s including waiting). Image size: 598741346 bytes. |
| | openshift-monitoring | kubelet | prometheus-k8s-1 | Created | Created container: prometheus |
| | openshift-monitoring | kubelet | prometheus-k8s-1 | Started | Started container prometheus |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b283544da0bfbf6c8c5a11e0ca9fb4daaf4ac4ec910b30c07c7bef65a98f11d" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-1 | Started | Started container thanos-sidecar |
| | openshift-console | kubelet | downloads-65bb9777fc-bm4pw | Started | Started container download-server |
| | openshift-monitoring | kubelet | prometheus-k8s-1 | Created | Created container: config-reloader |
| | openshift-monitoring | kubelet | prometheus-k8s-1 | Started | Started container config-reloader |
| | openshift-monitoring | kubelet | prometheus-k8s-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d24c4db9f6f0e9fb8ffdf9dd2b08101c37316b989e6709d13783e7d6d3baef73" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-1 | Created | Created container: thanos-sidecar |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b283544da0bfbf6c8c5a11e0ca9fb4daaf4ac4ec910b30c07c7bef65a98f11d" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-1 | Started | Started container setup |
| | openshift-console | kubelet | downloads-65bb9777fc-bm4pw | Created | Created container: download-server |
| | openshift-monitoring | kubelet | prometheus-k8s-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-1 | Created | Created container: setup |
| | openshift-monitoring | kubelet | prometheus-k8s-1 | Created | Created container: kube-rbac-proxy-web |
| | openshift-monitoring | kubelet | prometheus-k8s-1 | Started | Started container kube-rbac-proxy-web |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/service-account-private-key-6 -n openshift-kube-controller-manager because it was missing |
| | openshift-monitoring | kubelet | prometheus-k8s-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-1 | Created | Created container: kube-rbac-proxy |
| | openshift-monitoring | kubelet | prometheus-k8s-1 | Started | Started container kube-rbac-proxy |
| | openshift-monitoring | kubelet | prometheus-k8s-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-1 | Created | Created container: kube-rbac-proxy-thanos |
| | openshift-monitoring | kubelet | prometheus-k8s-1 | Started | Started container kube-rbac-proxy-thanos |
| (x2) | openshift-console | kubelet | downloads-65bb9777fc-bm4pw | ProbeError | Readiness probe error: Get "http://10.128.0.85:8080/": dial tcp 10.128.0.85:8080: connect: connection refused body: |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master-1 |
Created |
Created container: kube-apiserver-insecure-readyz | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master-1 |
Started |
Started container kube-apiserver-cert-regeneration-controller | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master-1 |
Created |
Created container: kube-apiserver-cert-syncer | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master-1 |
Created |
Created container: kube-apiserver-cert-regeneration-controller | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master-1 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine | |
openshift-marketplace |
kubelet |
redhat-marketplace-7dljg |
Killing |
Stopping container registry-server | |
| (x2) | openshift-console | kubelet | downloads-65bb9777fc-bm4pw | Unhealthy | Readiness probe failed: Get "http://10.128.0.85:8080/": dial tcp 10.128.0.85:8080: connect: connection refused |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-1 | Started | Started container kube-apiserver-insecure-readyz |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-1 | Started | Started container kube-apiserver-cert-syncer |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-1 | Started | Started container kube-apiserver |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/serving-cert-6 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-1 | Created | Created container: kube-apiserver |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-1 | Created | Created container: kube-apiserver-check-endpoints |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/localhost-recovery-client-token-6 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-1 | Started | Started container kube-apiserver-check-endpoints |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | RevisionTriggered | new revision 6 triggered by "required secret/service-account-private-key has changed" |
| | openshift-multus | kubelet | multus-additional-cni-plugins-dzmh2 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7983420590be0b0f62b726996dd73769a35c23a4b3b283f8cf20e09418e814eb" in 10.961s (10.961s including waiting). Image size: 869140966 bytes. |
| (x2) | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded changed from True to False ("NodeControllerDegraded: All master nodes are ready") |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-kube-scheduler-node | openshift-kube-scheduler-operator | MasterNodesReadyChanged | All master nodes are ready |
| | openshift-multus | kubelet | multus-additional-cni-plugins-dzmh2 | Started | Started container whereabouts-cni-bincopy |
| | openshift-multus | kubelet | multus-additional-cni-plugins-dzmh2 | Created | Created container: whereabouts-cni-bincopy |
| (x2) | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded changed from True to False ("NodeControllerDegraded: All master nodes are ready") |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-1 | ProbeError | Startup probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd ok [+]poststarthook/openshift.io-openshift-apiserver-reachable ok [+]poststarthook/openshift.io-oauth-apiserver-reachable ok [+]poststarthook/start-apiserver-admission-initializer ok [+]poststarthook/quota.openshift.io-clusterquotamapping ok [+]poststarthook/openshift.io-api-request-count-filter ok [+]poststarthook/openshift.io-startkubeinformers ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/priority-and-fairness-config-consumer ok [+]poststarthook/priority-and-fairness-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/start-apiextensions-informers ok [+]poststarthook/start-apiextensions-controllers ok [+]poststarthook/crd-informer-synced ok [+]poststarthook/start-system-namespaces-controller ok [+]poststarthook/start-cluster-authentication-info-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok [+]poststarthook/start-legacy-token-tracking-controller ok [+]poststarthook/start-service-ip-repair-controllers ok [-]poststarthook/rbac/bootstrap-roles failed: reason withheld [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld [+]poststarthook/priority-and-fairness-config-producer ok [+]poststarthook/bootstrap-controller ok [+]poststarthook/aggregator-reload-proxy-client-cert ok [+]poststarthook/start-kube-aggregator-informers ok [+]poststarthook/apiservice-status-local-available-controller ok [+]poststarthook/apiservice-status-remote-available-controller ok [+]poststarthook/apiservice-registration-controller ok [+]poststarthook/apiservice-wait-for-first-sync ok [+]poststarthook/apiservice-discovery-controller ok [+]poststarthook/kube-apiserver-autoregistration ok [+]autoregister-completion ok [+]poststarthook/apiservice-openapi-controller ok [+]poststarthook/apiservice-openapiv3-controller ok livez check failed |
| | openshift-multus | kubelet | multus-additional-cni-plugins-dzmh2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7983420590be0b0f62b726996dd73769a35c23a4b3b283f8cf20e09418e814eb" already present on machine |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kube-apiserver-node | kube-apiserver-operator | MasterNodesReadyChanged | All master nodes are ready |
| | openshift-marketplace | kubelet | community-operators-thpzb | Unhealthy | Readiness probe errored: rpc error: code = NotFound desc = container is not created or running: checking if PID of a24099ad7badd1258d3aa393520ac4b97128c5c19905baf28485e28594f56133 is running failed: container process not found |
| | openshift-multus | kubelet | multus-additional-cni-plugins-dzmh2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd0854905c4929cfbb163b57dd290d4a74e65d11c01d86b5e1e177a0c246106e" already present on machine |
| | openshift-multus | kubelet | multus-additional-cni-plugins-dzmh2 | Started | Started container whereabouts-cni |
| | openshift-multus | kubelet | multus-additional-cni-plugins-dzmh2 | Created | Created container: whereabouts-cni |
| | openshift-marketplace | kubelet | certified-operators-r4wbf | Unhealthy | Readiness probe errored: rpc error: code = NotFound desc = container is not created or running: checking if PID of 4c2c242407cf6d6fc6dc1cd76cf7393fb785eeb9caf34c198139065fca8e2326 is running failed: container process not found |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded changed from True to False ("NodeControllerDegraded: All master nodes are ready") |
| | openshift-multus | kubelet | multus-additional-cni-plugins-dzmh2 | Created | Created container: kube-multus-additional-cni-plugins |
| | openshift-kube-scheduler | multus | revision-pruner-6-master-0 | AddedInterface | Add eth0 [10.130.0.6/23] from ovn-kubernetes |
| | openshift-kube-controller-manager | multus | installer-5-master-0 | AddedInterface | Add eth0 [10.130.0.7/23] from ovn-kubernetes |
| | openshift-machine-config-operator | daemonset-controller | machine-config-server | SuccessfulCreate | Created pod: machine-config-server-xcgtf |
| (x2) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-kube-controller-manager-node | kube-controller-manager-operator | MasterNodesReadyChanged | All master nodes are ready |
| | openshift-marketplace | kubelet | redhat-operators-hh4tw | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5" |
| | openshift-kube-scheduler | kubelet | revision-pruner-6-master-0 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f81582ec6e6cc159d578a2d70ce7c8a4db8eb0172334226c9123770d7d2a1642" |
| | openshift-authentication | taint-eviction-controller | oauth-openshift-65687bc9c8-w9j4s | TaintManagerEviction | Cancelling deletion of Pod openshift-authentication/oauth-openshift-65687bc9c8-w9j4s |
| | openshift-ingress-canary | daemonset-controller | ingress-canary | SuccessfulCreate | Created pod: ingress-canary-fkqvb |
| | openshift-route-controller-manager | taint-eviction-controller | route-controller-manager-7968c6c999-tlxv6 | TaintManagerEviction | Cancelling deletion of Pod openshift-route-controller-manager/route-controller-manager-7968c6c999-tlxv6 |
| | openshift-kube-scheduler | multus | installer-6-master-0 | AddedInterface | Add eth0 [10.130.0.5/23] from ovn-kubernetes |
| | openshift-controller-manager | taint-eviction-controller | controller-manager-78c5d9fccd-5xwlt | TaintManagerEviction | Cancelling deletion of Pod openshift-controller-manager/controller-manager-78c5d9fccd-5xwlt |
| | openshift-authentication | multus | oauth-openshift-65687bc9c8-w9j4s | AddedInterface | Add eth0 [10.130.0.9/23] from ovn-kubernetes |
| | openshift-kube-scheduler | kubelet | revision-pruner-6-master-0 | Created | Created container: pruner |
| | openshift-marketplace | kubelet | redhat-operators-hh4tw | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5" in 3.774s (3.774s including waiting). Image size: 911296197 bytes. |
| | openshift-kube-scheduler | kubelet | installer-6-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f81582ec6e6cc159d578a2d70ce7c8a4db8eb0172334226c9123770d7d2a1642" already present on machine |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: Missing operand on node master-0" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: Missing operand on node master-0" |
| | openshift-kube-scheduler | kubelet | revision-pruner-6-master-0 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f81582ec6e6cc159d578a2d70ce7c8a4db8eb0172334226c9123770d7d2a1642" in 2.244s (2.244s including waiting). Image size: 499422833 bytes. |
| | openshift-dns | daemonset-controller | dns-default | SuccessfulCreate | Created pod: dns-default-6qp6p |
| | openshift-kube-scheduler | kubelet | revision-pruner-6-master-0 | Started | Started container pruner |
| | openshift-monitoring | kubelet | metrics-server-8475fbcb68-p4n8s | Killing | Stopping container metrics-server |
| | openshift-monitoring | deployment-controller | metrics-server | ScalingReplicaSet | Scaled down replica set metrics-server-8475fbcb68 to 0 from 1 |
| | openshift-monitoring | replicaset-controller | metrics-server-8475fbcb68 | SuccessfulDelete | Deleted pod: metrics-server-8475fbcb68-p4n8s |
| | openshift-marketplace | kubelet | redhat-operators-hh4tw | Created | Created container: registry-server |
| | openshift-network-operator | daemonset-controller | iptables-alerter | SuccessfulCreate | Created pod: iptables-alerter-hkmb5 |
| | openshift-console | kubelet | downloads-65bb9777fc-sd822 | Created | Created container: download-server |
| | openshift-marketplace | kubelet | redhat-operators-hh4tw | Started | Started container registry-server |
| | openshift-kube-scheduler | kubelet | installer-6-master-0 | Started | Started container installer |
| | openshift-route-controller-manager | kubelet | route-controller-manager-7968c6c999-tlxv6 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:da8d1dd8c084774a49a88aef98ef62c56592a46d75830ed0d3e5e363859e3b08" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: Missing operand on node master-0" |
| | openshift-ingress-canary | multus | ingress-canary-fkqvb | AddedInterface | Add eth0 [10.130.0.12/23] from ovn-kubernetes |
| | openshift-controller-manager | multus | controller-manager-78c5d9fccd-5xwlt | AddedInterface | Add eth0 [10.130.0.11/23] from ovn-kubernetes |
| | openshift-controller-manager | kubelet | controller-manager-78c5d9fccd-5xwlt | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5950bf8a793f25392f3fdfa898a2bfe0998be83e86a5f93c07a9d22a0816b9c6" |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 9, desired generation is 10.\nProgressing: deployment/controller-manager: updated replicas is 2, desired replicas is 3\nProgressing: deployment/route-controller-manager: observed generation is 7, desired generation is 8.\nProgressing: deployment/route-controller-manager: updated replicas is 2, desired replicas is 3\nProgressing: openshiftcontrollermanagers.operator.openshift.io/cluster: observed generation is 4, desired generation is 5." to "Progressing: deployment/controller-manager: updated replicas is 1, desired replicas is 3\nProgressing: deployment/route-controller-manager: updated replicas is 1, desired replicas is 3" |
| | openshift-network-operator | kubelet | iptables-alerter-hkmb5 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c1bf279b80440264700aa5e7b186b74a9ca45bd6a14638beb3ee5df0e610086a" already present on machine |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 2 nodes are at revision 5" to "NodeInstallerProgressing: 1 node is at revision 0; 2 nodes are at revision 5; 0 nodes have achieved new revision 6",Available message changed from "StaticPodsAvailable: 2 nodes are active; 1 node is at revision 0; 2 nodes are at revision 5" to "StaticPodsAvailable: 2 nodes are active; 1 node is at revision 0; 2 nodes are at revision 5; 0 nodes have achieved new revision 6" |
| | openshift-authentication | kubelet | oauth-openshift-65687bc9c8-w9j4s | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ed5799bca3f9d73de168cbc978de18a72d2cb6bd22149c4fc813fb7b9a971f5a" |
| | openshift-console | kubelet | downloads-65bb9777fc-sd822 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76058284378b0037d8c37e800ff8d9c8bec379904010e912e2e2b6414bc6bb7f" in 39.865s (39.865s including waiting). Image size: 2888816073 bytes. |
| | openshift-kube-scheduler | kubelet | installer-6-master-0 | Created | Created container: installer |
| | openshift-console | kubelet | downloads-65bb9777fc-sd822 | Started | Started container download-server |
| | openshift-route-controller-manager | multus | route-controller-manager-7968c6c999-tlxv6 | AddedInterface | Add eth0 [10.130.0.10/23] from ovn-kubernetes |
| | openshift-machine-config-operator | kubelet | machine-config-server-xcgtf | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d03c8198e20c39819634ba86ebc48d182a8b3f062cf7a3847175b91294512876" already present on machine |
| | openshift-machine-config-operator | kubelet | machine-config-server-xcgtf | Created | Created container: machine-config-server |
| | openshift-dns | multus | dns-default-6qp6p | AddedInterface | Add eth0 [10.130.0.13/23] from ovn-kubernetes |
| | openshift-machine-config-operator | kubelet | machine-config-server-xcgtf | Started | Started container machine-config-server |
| | openshift-dns | kubelet | dns-default-6qp6p | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:def4bc41ba62687d8c9a68b6f74c39240f651ec7a039a78a6535233581f430a7" |
| | openshift-console | multus | console-554dc689f9-c5k9h | AddedInterface | Add eth0 [10.129.0.78/23] from ovn-kubernetes |
| | openshift-ingress-canary | kubelet | ingress-canary-fkqvb | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:805f1bf09553ecf2e9d735c881539c011947eee7bf4c977b074e2d0396b9d99a" |
| (x3) | openshift-console | kubelet | downloads-65bb9777fc-sd822 | ProbeError | Readiness probe error: Get "http://10.129.0.67:8080/": dial tcp 10.129.0.67:8080: connect: connection refused body: |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | PodCreated | Created Pod/installer-6-master-0 -n openshift-kube-controller-manager because it was missing |
| (x4) | openshift-console | kubelet | console-668956f9dd-mlrd8 | ProbeError | Startup probe error: Get "https://10.128.0.86:8443/health": dial tcp 10.128.0.86:8443: connect: connection refused body: |
| (x4) | openshift-console | kubelet | console-668956f9dd-mlrd8 | Unhealthy | Startup probe failed: Get "https://10.128.0.86:8443/health": dial tcp 10.128.0.86:8443: connect: connection refused |
| | openshift-console | kubelet | console-554dc689f9-c5k9h | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8f562f655905d420982e90e7817214586a6e8103e7ef15d1dd57f2ae5ee16edb" already present on machine |
| | openshift-console | kubelet | console-554dc689f9-c5k9h | Created | Created container: console |
| | openshift-console | kubelet | console-554dc689f9-c5k9h | Started | Started container console |
| (x3) | openshift-console | kubelet | downloads-65bb9777fc-sd822 | Unhealthy | Readiness probe failed: Get "http://10.129.0.67:8080/": dial tcp 10.129.0.67:8080: connect: connection refused |
| | openshift-network-operator | kubelet | iptables-alerter-hkmb5 | Created | Created container: iptables-alerter |
| (x4) | openshift-route-controller-manager | kubelet | route-controller-manager-77674cffc8-k5fvv | BackOff | Back-off restarting failed container route-controller-manager in pod route-controller-manager-77674cffc8-k5fvv_openshift-route-controller-manager(e4c8f12e-4b62-49eb-a466-af75a571c62f) |
| | openshift-network-operator | kubelet | iptables-alerter-hkmb5 | Started | Started container iptables-alerter |
| | openshift-ingress-canary | kubelet | ingress-canary-fkqvb | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:805f1bf09553ecf2e9d735c881539c011947eee7bf4c977b074e2d0396b9d99a" in 3.666s (3.666s including waiting). Image size: 504222816 bytes. |
| (x3) | openshift-kube-apiserver | kubelet | kube-apiserver-master-1 | Unhealthy | Startup probe failed: HTTP probe failed with statuscode: 500 |
| | openshift-dns | kubelet | dns-default-6qp6p | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:def4bc41ba62687d8c9a68b6f74c39240f651ec7a039a78a6535233581f430a7" in 3.693s (3.693s including waiting). Image size: 477215701 bytes. |
| | openshift-route-controller-manager | kubelet | route-controller-manager-7968c6c999-tlxv6 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:da8d1dd8c084774a49a88aef98ef62c56592a46d75830ed0d3e5e363859e3b08" in 4.73s (4.73s including waiting). Image size: 480132757 bytes. |
| | openshift-console | kubelet | console-554dc689f9-rnmmd | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8f562f655905d420982e90e7817214586a6e8103e7ef15d1dd57f2ae5ee16edb" |
| | openshift-authentication | kubelet | oauth-openshift-65687bc9c8-w9j4s | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ed5799bca3f9d73de168cbc978de18a72d2cb6bd22149c4fc813fb7b9a971f5a" in 4.688s (4.688s including waiting). Image size: 474495494 bytes. |
| | openshift-console | multus | console-554dc689f9-rnmmd | AddedInterface | Add eth0 [10.130.0.14/23] from ovn-kubernetes |
| | openshift-controller-manager | kubelet | controller-manager-78c5d9fccd-5xwlt | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5950bf8a793f25392f3fdfa898a2bfe0998be83e86a5f93c07a9d22a0816b9c6" in 4.727s (4.727s including waiting). Image size: 551247630 bytes. |
| | openshift-kube-controller-manager | multus | installer-6-master-0 | AddedInterface | Add eth0 [10.130.0.15/23] from ovn-kubernetes |
| | openshift-route-controller-manager | kubelet | route-controller-manager-7968c6c999-tlxv6 | Started | Started container route-controller-manager |
| | openshift-authentication | kubelet | oauth-openshift-65687bc9c8-w9j4s | Started | Started container oauth-openshift |
| | openshift-ingress-canary | kubelet | ingress-canary-fkqvb | Started | Started container serve-healthcheck-canary |
| | openshift-controller-manager | kubelet | controller-manager-78c5d9fccd-5xwlt | Started | Started container controller-manager |
| | openshift-controller-manager | kubelet | controller-manager-78c5d9fccd-5xwlt | Created | Created container: controller-manager |
| | openshift-authentication | kubelet | oauth-openshift-65687bc9c8-w9j4s | Created | Created container: oauth-openshift |
| | openshift-ingress-canary | kubelet | ingress-canary-fkqvb | Created | Created container: serve-healthcheck-canary |
| | openshift-dns | kubelet | dns-default-6qp6p | Created | Created container: dns |
| | openshift-dns | kubelet | dns-default-6qp6p | Started | Started container kube-rbac-proxy |
| | openshift-dns | kubelet | dns-default-6qp6p | Created | Created container: kube-rbac-proxy |
| | openshift-route-controller-manager | kubelet | route-controller-manager-7968c6c999-tlxv6 | Created | Created container: route-controller-manager |
| | openshift-dns | kubelet | dns-default-6qp6p | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine |
| | openshift-dns | kubelet | dns-default-6qp6p | Started | Started container dns |
| | openshift-apiserver | multus | apiserver-5f68d4c887-pqcgn | AddedInterface | Add eth0 [10.130.0.16/23] from ovn-kubernetes |
| | openshift-oauth-apiserver | multus | apiserver-84c8b8d745-wnpsp | AddedInterface | Add eth0 [10.130.0.17/23] from ovn-kubernetes |
| | openshift-route-controller-manager | kubelet | route-controller-manager-7968c6c999-tlxv6 | ProbeError | Readiness probe error: Get "https://10.130.0.10:8443/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) body: |
| | openshift-authentication | kubelet | oauth-openshift-65687bc9c8-w9j4s | ProbeError | Readiness probe error: Get "https://10.130.0.9:6443/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) body: |
| | openshift-kube-controller-manager | kubelet | installer-6-master-0 | Started | Started container installer |
| | openshift-monitoring | multus | metrics-server-76c4979bdc-mgff4 | AddedInterface | Add eth0 [10.130.0.18/23] from ovn-kubernetes |
| | openshift-oauth-apiserver | kubelet | apiserver-84c8b8d745-wnpsp | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a10f1f5c782b4f4fb9c364625daf34791903749d4149eb87291c70598b16b404" |
| | openshift-route-controller-manager | kubelet | route-controller-manager-7968c6c999-tlxv6 | Unhealthy | Readiness probe failed: Get "https://10.130.0.10:8443/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) |
| | openshift-apiserver | kubelet | apiserver-5f68d4c887-pqcgn | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:db6d4edac103c373eb6bee221074d39e3707377b4d26444e98afb1a1363b3cb7" |
| | openshift-kube-controller-manager | kubelet | installer-6-master-0 | Created | Created container: installer |
| | openshift-kube-controller-manager | kubelet | installer-6-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:67a578604f1437ddb47d87e748b6772d86dd3856048cc355226789db22724b55" already present on machine |
| | openshift-authentication | kubelet | oauth-openshift-65687bc9c8-w9j4s | Unhealthy | Readiness probe failed: Get "https://10.130.0.9:6443/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) |
| | openshift-console | kubelet | console-554dc689f9-rnmmd | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8f562f655905d420982e90e7817214586a6e8103e7ef15d1dd57f2ae5ee16edb" in 3.337s (3.337s including waiting). Image size: 626969044 bytes. |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-console | kubelet | console-554dc689f9-rnmmd | Created | Created container: console |
| | openshift-console | kubelet | console-554dc689f9-rnmmd | Started | Started container console |
| | openshift-monitoring | kubelet | metrics-server-76c4979bdc-mgff4 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aa8586795f9801090b8f01a74743474c41b5987eefc3a9b2c58f937098a1704f" |
| | openshift-authentication | deployment-controller | oauth-openshift | ScalingReplicaSet | Scaled up replica set oauth-openshift-65687bc9c8 to 2 from 1 |
| | openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled up replica set controller-manager-78c5d9fccd to 2 from 1 |
| | openshift-authentication | replicaset-controller | oauth-openshift-55df5b4c9d | SuccessfulDelete | Deleted pod: oauth-openshift-55df5b4c9d-k6sz4 |
| | openshift-authentication | replicaset-controller | oauth-openshift-65687bc9c8 | SuccessfulCreate | Created pod: oauth-openshift-65687bc9c8-twgxt |
| | openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled down replica set route-controller-manager-77674cffc8 to 0 from 1 |
| | openshift-controller-manager | kubelet | controller-manager-56cfb99cfd-9798f | Killing | Stopping container controller-manager |
| | openshift-controller-manager | replicaset-controller | controller-manager-56cfb99cfd | SuccessfulDelete | Deleted pod: controller-manager-56cfb99cfd-9798f |
| | openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled down replica set controller-manager-56cfb99cfd to 0 from 1 |
| | openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled up replica set route-controller-manager-7968c6c999 to 2 from 1 |
| (x3) | openshift-kube-apiserver | kubelet | kube-apiserver-master-1 | ProbeError | Startup probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd ok [+]poststarthook/openshift.io-openshift-apiserver-reachable ok [+]poststarthook/openshift.io-oauth-apiserver-reachable ok [+]poststarthook/start-apiserver-admission-initializer ok [+]poststarthook/quota.openshift.io-clusterquotamapping ok [+]poststarthook/openshift.io-api-request-count-filter ok [+]poststarthook/openshift.io-startkubeinformers ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/priority-and-fairness-config-consumer ok [+]poststarthook/priority-and-fairness-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/start-apiextensions-informers ok [+]poststarthook/start-apiextensions-controllers ok [+]poststarthook/crd-informer-synced ok [+]poststarthook/start-system-namespaces-controller ok [+]poststarthook/start-cluster-authentication-info-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok [+]poststarthook/start-legacy-token-tracking-controller ok [+]poststarthook/start-service-ip-repair-controllers ok [-]poststarthook/rbac/bootstrap-roles failed: reason withheld [+]poststarthook/scheduling/bootstrap-system-priority-classes ok [+]poststarthook/priority-and-fairness-config-producer ok [+]poststarthook/bootstrap-controller ok [+]poststarthook/aggregator-reload-proxy-client-cert ok [+]poststarthook/start-kube-aggregator-informers ok [+]poststarthook/apiservice-status-local-available-controller ok [+]poststarthook/apiservice-status-remote-available-controller ok [+]poststarthook/apiservice-registration-controller ok [+]poststarthook/apiservice-wait-for-first-sync ok [+]poststarthook/apiservice-discovery-controller ok [+]poststarthook/kube-apiserver-autoregistration ok [+]autoregister-completion ok [+]poststarthook/apiservice-openapi-controller ok [+]poststarthook/apiservice-openapiv3-controller ok livez check failed |
| | openshift-authentication | deployment-controller | oauth-openshift | ScalingReplicaSet | Scaled down replica set oauth-openshift-55df5b4c9d to 0 from 1 |
| | openshift-authentication | kubelet | oauth-openshift-55df5b4c9d-k6sz4 | Killing | Stopping container oauth-openshift |
| | openshift-route-controller-manager | replicaset-controller | route-controller-manager-77674cffc8 | SuccessfulDelete | Deleted pod: route-controller-manager-77674cffc8-k5fvv |
| | openshift-controller-manager | replicaset-controller | controller-manager-78c5d9fccd | SuccessfulCreate | Created pod: controller-manager-78c5d9fccd-2lzk5 |
| | openshift-route-controller-manager | replicaset-controller | route-controller-manager-7968c6c999 | SuccessfulCreate | Created pod: route-controller-manager-7968c6c999-vcjcn |
| | openshift-console | kubelet | console-554dc689f9-c5k9h | Unhealthy | Startup probe failed: Get "https://10.129.0.78:8443/health": dial tcp 10.129.0.78:8443: connect: connection refused |
| | openshift-console | kubelet | console-554dc689f9-c5k9h | ProbeError | Startup probe error: Get "https://10.129.0.78:8443/health": dial tcp 10.129.0.78:8443: connect: connection refused body: |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-1 | KubeAPIReadyz | readyz=true |
openshift-console |
kubelet |
console-554dc689f9-rnmmd |
ProbeError |
Startup probe error: Get "https://10.130.0.14:8443/health": dial tcp 10.130.0.14:8443: connect: connection refused body: | |
openshift-apiserver |
kubelet |
apiserver-5f68d4c887-pqcgn |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:db6d4edac103c373eb6bee221074d39e3707377b4d26444e98afb1a1363b3cb7" in 4.61s (4.61s including waiting). Image size: 582409947 bytes. | |
openshift-console |
kubelet |
console-554dc689f9-rnmmd |
Unhealthy |
Startup probe failed: Get "https://10.130.0.14:8443/health": dial tcp 10.130.0.14:8443: connect: connection refused | |
openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 2/3 pods have been updated to the latest generation and 2/3 pods are available\nOAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/3 pods have been updated to the latest generation and 2/3 pods are available\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.34.12:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 2/3 pods have been updated to the latest generation and 2/3 pods are available\nOAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 2/3 pods have been updated to the latest generation and 2/3 pods are available\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.34.12:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" |
openshift-monitoring | kubelet | metrics-server-76c4979bdc-mgff4 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aa8586795f9801090b8f01a74743474c41b5987eefc3a9b2c58f937098a1704f" in 3.166s (3.166s including waiting). Image size: 464468268 bytes. |
openshift-oauth-apiserver | kubelet | apiserver-84c8b8d745-wnpsp | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a10f1f5c782b4f4fb9c364625daf34791903749d4149eb87291c70598b16b404" already present on machine |
openshift-oauth-apiserver | kubelet | apiserver-84c8b8d745-wnpsp | Started | Started container fix-audit-permissions |
openshift-monitoring | kubelet | metrics-server-76c4979bdc-mgff4 | Started | Started container metrics-server |
openshift-marketplace | kubelet | redhat-operators-hh4tw | Killing | Stopping container registry-server |
openshift-oauth-apiserver | kubelet | apiserver-84c8b8d745-wnpsp | Created | Created container: fix-audit-permissions |
openshift-apiserver | kubelet | apiserver-5f68d4c887-pqcgn | Created | Created container: fix-audit-permissions |
openshift-apiserver | kubelet | apiserver-5f68d4c887-pqcgn | Started | Started container fix-audit-permissions |
openshift-apiserver | kubelet | apiserver-5f68d4c887-pqcgn | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:db6d4edac103c373eb6bee221074d39e3707377b4d26444e98afb1a1363b3cb7" already present on machine |
openshift-monitoring | kubelet | metrics-server-76c4979bdc-mgff4 | Created | Created container: metrics-server |
openshift-oauth-apiserver | kubelet | apiserver-84c8b8d745-wnpsp | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a10f1f5c782b4f4fb9c364625daf34791903749d4149eb87291c70598b16b404" in 4.601s (4.601s including waiting). Image size: 498371692 bytes. |
openshift-apiserver | kubelet | apiserver-5f68d4c887-pqcgn | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" |
openshift-oauth-apiserver | kubelet | apiserver-84c8b8d745-wnpsp | Started | Started container oauth-apiserver |
openshift-apiserver | kubelet | apiserver-5f68d4c887-pqcgn | Started | Started container openshift-apiserver |
openshift-apiserver | kubelet | apiserver-5f68d4c887-pqcgn | Created | Created container: openshift-apiserver |
openshift-oauth-apiserver | kubelet | apiserver-84c8b8d745-wnpsp | Created | Created container: oauth-apiserver |
openshift-apiserver | kubelet | apiserver-5f68d4c887-pqcgn | Created | Created container: openshift-apiserver-check-endpoints |
openshift-apiserver | kubelet | apiserver-5f68d4c887-pqcgn | Started | Started container openshift-apiserver-check-endpoints |
openshift-apiserver | kubelet | apiserver-5f68d4c887-pqcgn | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" in 1.962s (1.962s including waiting). Image size: 508004341 bytes. |
openshift-route-controller-manager | multus | route-controller-manager-7968c6c999-vcjcn | AddedInterface | Add eth0 [10.128.0.91/23] from ovn-kubernetes |
openshift-apiserver | kubelet | apiserver-5f68d4c887-pqcgn | ProbeError | Startup probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd ok [+]poststarthook/start-apiserver-admission-initializer ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/max-in-flight-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/image.openshift.io-apiserver-caches ok [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld [+]poststarthook/project.openshift.io-projectcache ok [+]poststarthook/project.openshift.io-projectauthorizationcache ok [+]poststarthook/openshift.io-startinformers ok [+]poststarthook/openshift.io-restmapperupdater ok [+]poststarthook/quota.openshift.io-clusterquotamapping ok livez check failed |
openshift-route-controller-manager | kubelet | route-controller-manager-7968c6c999-vcjcn | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:da8d1dd8c084774a49a88aef98ef62c56592a46d75830ed0d3e5e363859e3b08" already present on machine |
openshift-route-controller-manager | kubelet | route-controller-manager-7968c6c999-vcjcn | Started | Started container route-controller-manager |
openshift-route-controller-manager | kubelet | route-controller-manager-7968c6c999-vcjcn | Created | Created container: route-controller-manager |
openshift-controller-manager | multus | controller-manager-78c5d9fccd-2lzk5 | AddedInterface | Add eth0 [10.128.0.92/23] from ovn-kubernetes |
openshift-multus | multus | network-metrics-daemon-p5vjv | AddedInterface | Add eth0 [10.130.0.8/23] from ovn-kubernetes |
openshift-network-diagnostics | multus | network-check-target-vmk66 | AddedInterface | Add eth0 [10.130.0.3/23] from ovn-kubernetes |
(x2) | openshift-route-controller-manager | kubelet | route-controller-manager-7968c6c999-vcjcn | ProbeError | Readiness probe error: Get "https://10.128.0.91:8443/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) body: |
(x2) | openshift-route-controller-manager | kubelet | route-controller-manager-7968c6c999-vcjcn | Unhealthy | Readiness probe failed: Get "https://10.128.0.91:8443/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) |
(x2) | openshift-apiserver | kubelet | apiserver-5f68d4c887-pqcgn | Unhealthy | Startup probe failed: HTTP probe failed with statuscode: 500 |
openshift-apiserver | kubelet | apiserver-5f68d4c887-pqcgn | ProbeError | Startup probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd ok [+]poststarthook/start-apiserver-admission-initializer ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/max-in-flight-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/image.openshift.io-apiserver-caches ok [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok [+]poststarthook/project.openshift.io-projectcache ok [+]poststarthook/project.openshift.io-projectauthorizationcache ok [+]poststarthook/openshift.io-startinformers ok [+]poststarthook/openshift.io-restmapperupdater ok [+]poststarthook/quota.openshift.io-clusterquotamapping ok livez check failed |
openshift-controller-manager | kubelet | controller-manager-78c5d9fccd-2lzk5 | Created | Created container: controller-manager |
openshift-controller-manager | openshift-controller-manager | openshift-master-controllers | LeaderElection | controller-manager-78c5d9fccd-2lzk5 became leader |
openshift-oauth-apiserver | kubelet | apiserver-7b6784d654-s9576 | Killing | Stopping container oauth-apiserver |
openshift-controller-manager | kubelet | controller-manager-78c5d9fccd-2lzk5 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5950bf8a793f25392f3fdfa898a2bfe0998be83e86a5f93c07a9d22a0816b9c6" already present on machine |
openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | NodeCurrentRevisionChanged | Updated node "master-1" from revision 1 to 5 because static pod is ready |
openshift-oauth-apiserver | replicaset-controller | apiserver-7b6784d654 | SuccessfulDelete | Deleted pod: apiserver-7b6784d654-s9576 |
openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled down replica set console-668956f9dd to 0 from 1 |
openshift-oauth-apiserver | replicaset-controller | apiserver-84c8b8d745 | SuccessfulCreate | Created pod: apiserver-84c8b8d745-j8fqz |
openshift-console | replicaset-controller | console-668956f9dd | SuccessfulDelete | Deleted pod: console-668956f9dd-mlrd8 |
openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 1 node is at revision 1; 1 node is at revision 2; 0 nodes have achieved new revision 5" to "NodeInstallerProgressing: 1 node is at revision 0; 1 node is at revision 2; 1 node is at revision 5",Available message changed from "StaticPodsAvailable: 2 nodes are active; 1 node is at revision 0; 1 node is at revision 1; 1 node is at revision 2; 0 nodes have achieved new revision 5" to "StaticPodsAvailable: 2 nodes are active; 1 node is at revision 0; 1 node is at revision 2; 1 node is at revision 5" |
openshift-console | kubelet | console-668956f9dd-mlrd8 | Killing | Stopping container console |
openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: updated replicas is 1, desired replicas is 3\nProgressing: deployment/route-controller-manager: updated replicas is 1, desired replicas is 3" to "Progressing: deployment/controller-manager: updated replicas is 2, desired replicas is 3\nProgressing: deployment/route-controller-manager: updated replicas is 2, desired replicas is 3" |
openshift-controller-manager | kubelet | controller-manager-78c5d9fccd-2lzk5 | Started | Started container controller-manager |
openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded message changed from "RouteHealthDegraded: route not yet available, https://console-openshift-console.apps.ocp.openstack.lab returns '503 Service Unavailable'" to "All is well",Progressing changed from True to False ("All is well"),Available changed from False to True ("All is well") |
openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Progressing message changed from "SyncLoopRefreshProgressing: working toward version 4.18.25, 0 replicas available" to "SyncLoopRefreshProgressing: working toward version 4.18.25, 1 replicas available",Available message changed from "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: route not yet available, https://console-openshift-console.apps.ocp.openstack.lab returns '503 Service Unavailable'" to "RouteHealthAvailable: route not yet available, https://console-openshift-console.apps.ocp.openstack.lab returns '503 Service Unavailable'" |
openshift-oauth-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled down replica set apiserver-7b6784d654 to 0 from 1 |
openshift-oauth-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled up replica set apiserver-84c8b8d745 to 3 from 2 |
openshift-operator-controller | multus | operator-controller-controller-manager-668cb7cdc8-lwlfz | AddedInterface | Add eth0 [10.128.0.42/23] from ovn-kubernetes |
openshift-catalogd | multus | catalogd-controller-manager-596f9d8bbf-wn7c6 | AddedInterface | Add eth0 [10.128.0.41/23] from ovn-kubernetes |
openshift-catalogd | kubelet | catalogd-controller-manager-596f9d8bbf-wn7c6 | Created | Created container: kube-rbac-proxy |
openshift-catalogd | kubelet | catalogd-controller-manager-596f9d8bbf-wn7c6 | Created | Created container: manager |
openshift-catalogd | kubelet | catalogd-controller-manager-596f9d8bbf-wn7c6 | Started | Started container manager |
openshift-operator-controller | kubelet | operator-controller-controller-manager-668cb7cdc8-lwlfz | Started | Started container kube-rbac-proxy |
openshift-operator-controller | kubelet | operator-controller-controller-manager-668cb7cdc8-lwlfz | Created | Created container: kube-rbac-proxy |
openshift-operator-controller | kubelet | operator-controller-controller-manager-668cb7cdc8-lwlfz | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94bcc0ff0f9ec7df4aeb53fe4bf0310e26cb7b40bdf772efc95a7ccfcfe69721" already present on machine |
openshift-operator-controller | kubelet | operator-controller-controller-manager-668cb7cdc8-lwlfz | Created | Created container: manager |
openshift-catalogd | kubelet | catalogd-controller-manager-596f9d8bbf-wn7c6 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine |
openshift-operator-controller | kubelet | operator-controller-controller-manager-668cb7cdc8-lwlfz | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine |
openshift-operator-controller | kubelet | operator-controller-controller-manager-668cb7cdc8-lwlfz | Started | Started container manager |
openshift-catalogd | kubelet | catalogd-controller-manager-596f9d8bbf-wn7c6 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76a6a279901a441ec7d5e67c384c86cd72feaa38e08365ec1eed45fb11b5099f" already present on machine |
openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled down replica set controller-manager-66975b7c4d to 0 from 1 |
openshift-catalogd | kubelet | catalogd-controller-manager-596f9d8bbf-wn7c6 | Started | Started container kube-rbac-proxy |
openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled up replica set controller-manager-78c5d9fccd to 3 from 2 |
openshift-controller-manager | replicaset-controller | controller-manager-66975b7c4d | SuccessfulDelete | Deleted pod: controller-manager-66975b7c4d-j962d |
openshift-controller-manager | kubelet | controller-manager-66975b7c4d-j962d | Unhealthy | Readiness probe failed: Get "https://10.129.0.59:8443/healthz": dial tcp 10.129.0.59:8443: connect: connection refused |
openshift-operator-controller | operator-controller-controller-manager-668cb7cdc8-lwlfz_ce7e7efd-88b8-4057-ac4a-637f2e607d16 | 9c4404e7.operatorframework.io | LeaderElection | operator-controller-controller-manager-668cb7cdc8-lwlfz_ce7e7efd-88b8-4057-ac4a-637f2e607d16 became leader |
openshift-controller-manager | kubelet | controller-manager-66975b7c4d-j962d | ProbeError | Readiness probe error: Get "https://10.129.0.59:8443/healthz": dial tcp 10.129.0.59:8443: connect: connection refused body: |
openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 2/3 pods have been updated to the latest generation and 2/3 pods are available\nOAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 2/3 pods have been updated to the latest generation and 2/3 pods are available\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.34.12:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 3/3 pods have been updated to the latest generation and 2/3 pods are available\nOAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 2/3 pods have been updated to the latest generation and 2/3 pods are available\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.34.12:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" |
openshift-catalogd | catalogd-controller-manager-596f9d8bbf-wn7c6_b901b19a-039e-460b-bec4-949042666bc7 | catalogd-operator-lock | LeaderElection | catalogd-controller-manager-596f9d8bbf-wn7c6_b901b19a-039e-460b-bec4-949042666bc7 became leader |
openshift-controller-manager | kubelet | controller-manager-66975b7c4d-j962d | Killing | Stopping container controller-manager |
openshift-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled down replica set apiserver-595d5f74d8 to 0 from 1 |
openshift-route-controller-manager | replicaset-controller | route-controller-manager-7968c6c999 | SuccessfulCreate | Created pod: route-controller-manager-7968c6c999-b54xp |
openshift-route-controller-manager | kubelet | route-controller-manager-76f4d8cd68-t98ml | Killing | Stopping container route-controller-manager |
openshift-controller-manager | replicaset-controller | controller-manager-78c5d9fccd | SuccessfulCreate | Created pod: controller-manager-78c5d9fccd-pr9sv |
openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled down replica set route-controller-manager-76f4d8cd68 to 0 from 1 |
openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled up replica set route-controller-manager-7968c6c999 to 3 from 2 |
openshift-route-controller-manager | replicaset-controller | route-controller-manager-76f4d8cd68 | SuccessfulDelete | Deleted pod: route-controller-manager-76f4d8cd68-t98ml |
openshift-apiserver | replicaset-controller | apiserver-595d5f74d8 | SuccessfulDelete | Deleted pod: apiserver-595d5f74d8-hck8v |
openshift-apiserver | replicaset-controller | apiserver-5f68d4c887 | SuccessfulCreate | Created pod: apiserver-5f68d4c887-j7ckh |
openshift-apiserver | kubelet | apiserver-595d5f74d8-hck8v | Killing | Stopping container openshift-apiserver |
openshift-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled up replica set apiserver-5f68d4c887 to 3 from 2 |
openshift-apiserver | kubelet | apiserver-595d5f74d8-hck8v | Killing | Stopping container openshift-apiserver-check-endpoints |
openshift-controller-manager | kubelet | controller-manager-78c5d9fccd-pr9sv | Started | Started container controller-manager |
openshift-controller-manager | multus | controller-manager-78c5d9fccd-pr9sv | AddedInterface | Add eth0 [10.129.0.79/23] from ovn-kubernetes |
openshift-controller-manager | kubelet | controller-manager-78c5d9fccd-pr9sv | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5950bf8a793f25392f3fdfa898a2bfe0998be83e86a5f93c07a9d22a0816b9c6" already present on machine |
openshift-controller-manager | kubelet | controller-manager-78c5d9fccd-pr9sv | Created | Created container: controller-manager |
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
openshift-route-controller-manager | route-controller-manager | openshift-route-controllers | LeaderElection | route-controller-manager-7968c6c999-b54xp_51fb55c4-379a-444c-bdd4-4902ee3e5508 became leader |
openshift-route-controller-manager | kubelet | route-controller-manager-7968c6c999-b54xp | Started | Started container route-controller-manager |
openshift-route-controller-manager | kubelet | route-controller-manager-7968c6c999-b54xp | Created | Created container: route-controller-manager |
openshift-route-controller-manager | multus | route-controller-manager-7968c6c999-b54xp | AddedInterface | Add eth0 [10.129.0.80/23] from ovn-kubernetes |
openshift-route-controller-manager | kubelet | route-controller-manager-7968c6c999-b54xp | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:da8d1dd8c084774a49a88aef98ef62c56592a46d75830ed0d3e5e363859e3b08" already present on machine |
openshift-kube-scheduler | static-pod-installer | installer-6-master-0 | StaticPodInstallerCompleted | Successfully installed revision 6 |
openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Progressing message changed from "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to deploy pods\nOperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to deploy pods" to "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to deploy pods",Available message changed from "CatalogdDeploymentCatalogdControllerManagerAvailable: Waiting for Deployment\nOperatorcontrollerDeploymentOperatorControllerControllerManagerAvailable: Waiting for Deployment" to "CatalogdDeploymentCatalogdControllerManagerAvailable: Waiting for Deployment" |
openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Degraded message changed from "CatalogdClusterCatalogOpenshiftCertifiedOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" to "CatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" |
openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Degraded message changed from "CatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" to "CatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" |
openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Degraded message changed from "CatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" to "CatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master-0 |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b283544da0bfbf6c8c5a11e0ca9fb4daaf4ac4ec910b30c07c7bef65a98f11d" | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: Missing operand on node master-0" to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: Missing PodIP in operand openshift-kube-scheduler-master-0 on node master-0" | |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Degraded changed from True to False ("All is well") |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Progressing changed from True to False ("All is well") |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Progressing changed from True to False ("All is well"),Available changed from False to True ("All is well") |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b283544da0bfbf6c8c5a11e0ca9fb4daaf4ac4ec910b30c07c7bef65a98f11d" in 6.62s (6.62s including waiting). Image size: 945482213 bytes. |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Started | Started container wait-for-host-port |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-guardcontroller | openshift-kube-scheduler-operator | PodCreated | Created Pod/openshift-kube-scheduler-guard-master-0 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Created | Created container: wait-for-host-port |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-guard-master-0 | Created | Created container: guard |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f81582ec6e6cc159d578a2d70ce7c8a4db8eb0172334226c9123770d7d2a1642" already present on machine |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-guard-master-0 | Started | Started container guard |
| | openshift-kube-scheduler | multus | openshift-kube-scheduler-guard-master-0 | AddedInterface | Add eth0 [10.130.0.19/23] from ovn-kubernetes |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b283544da0bfbf6c8c5a11e0ca9fb4daaf4ac4ec910b30c07c7bef65a98f11d" already present on machine |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Created | Created container: kube-scheduler |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Started | Started container kube-scheduler |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Started | Started container kube-scheduler-recovery-controller |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Created | Created container: kube-scheduler-cert-syncer |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Started | Started container kube-scheduler-cert-syncer |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f81582ec6e6cc159d578a2d70ce7c8a4db8eb0172334226c9123770d7d2a1642" already present on machine |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Created | Created container: kube-scheduler-recovery-controller |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-guard-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f81582ec6e6cc159d578a2d70ce7c8a4db8eb0172334226c9123770d7d2a1642" already present on machine |
| | openshift-etcd | multus | installer-8-master-0 | AddedInterface | Add eth0 [10.130.0.4/23] from ovn-kubernetes |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: Missing PodIP in operand openshift-kube-scheduler-master-0 on node master-0" to "NodeControllerDegraded: All master nodes are ready" |
| | openshift-authentication | kubelet | oauth-openshift-65687bc9c8-twgxt | Started | Started container oauth-openshift |
| | openshift-authentication | kubelet | oauth-openshift-65687bc9c8-twgxt | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ed5799bca3f9d73de168cbc978de18a72d2cb6bd22149c4fc813fb7b9a971f5a" already present on machine |
| | openshift-authentication | multus | oauth-openshift-65687bc9c8-twgxt | AddedInterface | Add eth0 [10.129.0.81/23] from ovn-kubernetes |
| | openshift-authentication | kubelet | oauth-openshift-65687bc9c8-twgxt | Created | Created container: oauth-openshift |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ConfigMapCreated | Created ConfigMap/monitoring-shared-config -n openshift-config-managed because it was missing |
| | openshift-authentication | replicaset-controller | oauth-openshift-65687bc9c8 | SuccessfulCreate | Created pod: oauth-openshift-65687bc9c8-h4cd4 |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 3/3 pods have been updated to the latest generation and 2/3 pods are available\nOAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 2/3 pods have been updated to the latest generation and 2/3 pods are available\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.34.12:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 3/3 pods have been updated to the latest generation and 2/3 pods are available\nOAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 2/3 pods have been updated to the latest generation and 2/3 pods are available" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 3/3 pods have been updated to the latest generation and 2/3 pods are available\nOAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 2/3 pods have been updated to the latest generation and 2/3 pods are available" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 3/3 pods have been updated to the latest generation and 2/3 pods are available\nOAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 2/3 pods have been updated to the latest generation and 2/3 pods are available\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.34.12:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" |
| | openshift-authentication | deployment-controller | oauth-openshift | ScalingReplicaSet | Scaled down replica set oauth-openshift-6ddc4f49f9 to 0 from 1 |
| | openshift-authentication | kubelet | oauth-openshift-6ddc4f49f9-thnnf | Killing | Stopping container oauth-openshift |
| | openshift-authentication | deployment-controller | oauth-openshift | ScalingReplicaSet | Scaled up replica set oauth-openshift-65687bc9c8 to 3 from 2 |
| | openshift-authentication | replicaset-controller | oauth-openshift-6ddc4f49f9 | SuccessfulDelete | Deleted pod: oauth-openshift-6ddc4f49f9-thnnf |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-guardcontroller | openshift-kube-scheduler-operator | PodUpdated | Updated Pod/openshift-kube-scheduler-guard-master-0 -n openshift-kube-scheduler because it changed |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | NodeTargetRevisionChanged | Updating node "master-0" from revision 0 to 5 because node master-0 static pod not found |
| | openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled up replica set console-77d8f866f9 to 2 |
| | openshift-console | replicaset-controller | console-77d8f866f9 | SuccessfulCreate | Created pod: console-77d8f866f9-skvf6 |
| | openshift-console | kubelet | console-554dc689f9-rnmmd | Killing | Stopping container console |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded message changed from "All is well" to "DeploymentSyncDegraded: Operation cannot be fulfilled on deployments.apps \"console\": the object has been modified; please apply your changes to the latest version and try again",Progressing changed from True to False ("All is well") |
| | openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled down replica set console-554dc689f9 to 1 from 2 |
| | openshift-console | replicaset-controller | console-554dc689f9 | SuccessfulDelete | Deleted pod: console-554dc689f9-rnmmd |
| | openshift-console | replicaset-controller | console-77d8f866f9 | SuccessfulCreate | Created pod: console-77d8f866f9-8jlq8 |
| | openshift-console-operator | console-operator-console-operator-consoleoperator | console-operator | DeploymentUpdateFailed | Failed to update Deployment.apps/console -n openshift-console: Operation cannot be fulfilled on deployments.apps "console": the object has been modified; please apply your changes to the latest version and try again |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded message changed from "DeploymentSyncDegraded: Operation cannot be fulfilled on deployments.apps \"console\": the object has been modified; please apply your changes to the latest version and try again" to "All is well",Progressing changed from False to True ("SyncLoopRefreshProgressing: working toward version 4.18.25, 1 replicas available") |
| | openshift-console | kubelet | console-77d8f866f9-skvf6 | Created | Created container: console |
| | openshift-console | multus | console-77d8f866f9-skvf6 | AddedInterface | Add eth0 [10.128.0.93/23] from ovn-kubernetes |
| | openshift-console | kubelet | console-77d8f866f9-skvf6 | Started | Started container console |
| | openshift-console | kubelet | console-77d8f866f9-skvf6 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8f562f655905d420982e90e7817214586a6e8103e7ef15d1dd57f2ae5ee16edb" already present on machine |
| | openshift-monitoring | statefulset-controller | alertmanager-main | SuccessfulDelete | delete Pod alertmanager-main-1 in StatefulSet alertmanager-main successful |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Killing | Stopping container alertmanager |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 3/3 pods have been updated to the latest generation and 2/3 pods are available\nOAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 2/3 pods have been updated to the latest generation and 2/3 pods are available\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.34.12:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 3/3 pods have been updated to the latest generation and 2/3 pods are available\nOAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 3/3 pods have been updated to the latest generation and 2/3 pods are available\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.34.12:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | PodCreated | Created Pod/installer-5-master-0 -n openshift-kube-apiserver because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:19291a8938541dd95496e6f04aad7abf914ea2c8d076c1f149a12368682f85d4" |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Created | Created container: kube-controller-manager |
| | openshift-kube-apiserver | multus | installer-5-master-0 | AddedInterface | Add eth0 [10.130.0.20/23] from ovn-kubernetes |
| | openshift-kube-apiserver | kubelet | installer-5-master-0 | Created | Created container: installer |
| | openshift-kube-controller-manager | static-pod-installer | installer-6-master-0 | StaticPodInstallerCompleted | Successfully installed revision 6 |
| | openshift-kube-apiserver | kubelet | installer-5-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine |
| | openshift-kube-apiserver | kubelet | installer-5-master-0 | Started | Started container installer |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Started | Started container kube-controller-manager |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b283544da0bfbf6c8c5a11e0ca9fb4daaf4ac4ec910b30c07c7bef65a98f11d" already present on machine |
| (x2) | openshift-monitoring | statefulset-controller | alertmanager-main | SuccessfulCreate | create Pod alertmanager-main-1 in StatefulSet alertmanager-main successful |
| | openshift-monitoring | multus | alertmanager-main-1 | AddedInterface | Add eth0 [10.130.0.21/23] from ovn-kubernetes |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d26267190f13ef59cf0f8f5eee729694c7faccc36ab1294566192272625a58af" |
| | openshift-monitoring | statefulset-controller | prometheus-k8s | SuccessfulDelete | delete Pod prometheus-k8s-1 in StatefulSet prometheus-k8s successful |
| | openshift-monitoring | kubelet | prometheus-k8s-1 | Killing | Stopping container prometheus |
| | openshift-monitoring | kubelet | prometheus-k8s-1 | Killing | Stopping container thanos-sidecar |
| | openshift-monitoring | kubelet | prometheus-k8s-1 | Killing | Stopping container kube-rbac-proxy-thanos |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: Missing operand on node master-0" to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: Missing operand on node master-0\nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: " |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:67a578604f1437ddb47d87e748b6772d86dd3856048cc355226789db22724b55" already present on machine |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Created | Created container: kube-controller-manager-cert-syncer |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:67a578604f1437ddb47d87e748b6772d86dd3856048cc355226789db22724b55" already present on machine |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Started | Started container init-config-reloader |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Started | Started container cluster-policy-controller |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Created | Created container: init-config-reloader |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d26267190f13ef59cf0f8f5eee729694c7faccc36ab1294566192272625a58af" in 2.737s (2.737s including waiting). Image size: 430951015 bytes. |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Created | Created container: cluster-policy-controller |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:19291a8938541dd95496e6f04aad7abf914ea2c8d076c1f149a12368682f85d4" in 3.435s (3.435s including waiting). Image size: 498279559 bytes. |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Started | Started container kube-controller-manager-cert-syncer |
| | openshift-kube-controller-manager | cert-recovery-controller | openshift-kube-controller-manager | ControlPlaneTopology | unable to get control plane topology, using HA cluster values for leader election: Get "https://localhost:6443/apis/config.openshift.io/v1/infrastructures/cluster": dial tcp [::1]:6443: connect: connection refused |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:755d2dc7bc83f2e1c10e6a0a70dd9acdd6bc282ad4ae973794d262a785e9f6d6" |
| | openshift-kube-controller-manager | cluster-policy-controller | kube-controller-manager-master-0 | ControlPlaneTopology | unable to get control plane topology, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Created | Created container: kube-controller-manager-recovery-controller |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-guardcontroller | kube-controller-manager-operator | PodCreated | Created Pod/kube-controller-manager-guard-master-0 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Started | Started container kube-controller-manager-recovery-controller |
| (x2) | openshift-monitoring | statefulset-controller | prometheus-k8s | SuccessfulCreate | create Pod prometheus-k8s-1 in StatefulSet prometheus-k8s successful |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:755d2dc7bc83f2e1c10e6a0a70dd9acdd6bc282ad4ae973794d262a785e9f6d6" in 1.704s (1.704s including waiting). Image size: 460575314 bytes. |
| (x12) | openshift-monitoring | kubelet | metrics-server-8475fbcb68-p4n8s | FailedMount | MountVolume.SetUp failed for volume "client-ca-bundle" : secret "metrics-server-2hutru8havafv" not found |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Created | Created container: alertmanager |
| | openshift-kube-controller-manager | multus | kube-controller-manager-guard-master-0 | AddedInterface | Add eth0 [10.130.0.22/23] from ovn-kubernetes |
| | openshift-monitoring | multus | prometheus-k8s-1 | AddedInterface | Add eth0 [10.130.0.23/23] from ovn-kubernetes |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Created | Created container: kube-rbac-proxy-metric |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d26267190f13ef59cf0f8f5eee729694c7faccc36ab1294566192272625a58af" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-1 | Created | Created container: init-config-reloader |
| | openshift-monitoring | kubelet | prometheus-k8s-1 | Started | Started container init-config-reloader |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Started | Started container kube-rbac-proxy-metric |
| | openshift-console | replicaset-controller | console-554dc689f9 | SuccessfulDelete | Deleted pod: console-554dc689f9-c5k9h |
| | openshift-console | kubelet | console-554dc689f9-c5k9h | Killing | Stopping container console |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Started | Started container kube-rbac-proxy-web |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-guard-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:67a578604f1437ddb47d87e748b6772d86dd3856048cc355226789db22724b55" already present on machine |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-guard-master-0 | Created | Created container: guard |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61f170d009db78c5df2e61a5de6cbd57283366bb46168eea3b0cca5f005bbf59" |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Started | Started container kube-rbac-proxy |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Created | Created container: kube-rbac-proxy |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-guard-master-0 | Started | Started container guard |
| | openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled down replica set console-554dc689f9 to 0 from 1 |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Started | Started container alertmanager |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d26267190f13ef59cf0f8f5eee729694c7faccc36ab1294566192272625a58af" already present on machine |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Created | Created container: config-reloader |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Started | Started container config-reloader |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Created | Created container: kube-rbac-proxy-web |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Started | Started container prom-label-proxy |
| | openshift-monitoring | kubelet | prometheus-k8s-1 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b7ea005d75360221e268ef4a671bd1a5eb15acc98b32c7c716176ad5b6cd73d" |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Created | Created container: prom-label-proxy |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61f170d009db78c5df2e61a5de6cbd57283366bb46168eea3b0cca5f005bbf59" in 897ms (897ms including waiting). Image size: 406142487 bytes. |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: " to "NodeControllerDegraded: All master nodes are ready" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: Missing operand on node master-0\nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: " to "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: " |
| | openshift-monitoring | kubelet | prometheus-k8s-1 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b7ea005d75360221e268ef4a671bd1a5eb15acc98b32c7c716176ad5b6cd73d" in 3.423s (3.423s including waiting). Image size: 598741346 bytes. |
| | openshift-monitoring | kubelet | prometheus-k8s-1 | Started | Started container prometheus |
| | openshift-monitoring | kubelet | prometheus-k8s-1 | Created | Created container: config-reloader |
| | openshift-monitoring | kubelet | prometheus-k8s-1 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d24c4db9f6f0e9fb8ffdf9dd2b08101c37316b989e6709d13783e7d6d3baef73" |
| | openshift-monitoring | kubelet | prometheus-k8s-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d26267190f13ef59cf0f8f5eee729694c7faccc36ab1294566192272625a58af" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-1 | Created | Created container: prometheus |
| | openshift-monitoring | kubelet | prometheus-k8s-1 | Started | Started container config-reloader |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 3/3 pods have been updated to the latest generation and 2/3 pods are available\nOAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 3/3 pods have been updated to the latest generation and 2/3 pods are available" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 3/3 pods have been updated to the latest generation and 2/3 pods are available\nOAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 3/3 pods have been updated to the latest generation and 2/3 pods are available\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.34.12:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 3/3 pods have been updated to the latest generation and 2/3 pods are available\nOAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 3/3 pods have been updated to the latest generation and 2/3 pods are available\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.34.12:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 3/3 pods have been updated to the latest generation and 2/3 pods are available\nOAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 3/3 pods have been updated to the latest generation and 2/3 pods are available" |
| (x8) | openshift-apiserver | kubelet | apiserver-595d5f74d8-hck8v | ProbeError | Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd excluded: ok [+]etcd-readiness excluded: ok [+]poststarthook/start-apiserver-admission-initializer ok [+]informer-sync ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/max-in-flight-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/image.openshift.io-apiserver-caches ok [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok [+]poststarthook/project.openshift.io-projectcache ok [+]poststarthook/project.openshift.io-projectauthorizationcache ok [+]poststarthook/openshift.io-startinformers ok [+]poststarthook/openshift.io-restmapperupdater ok [+]poststarthook/quota.openshift.io-clusterquotamapping ok [-]shutdown failed: reason withheld readyz check failed |
| (x8) | openshift-apiserver | kubelet | apiserver-595d5f74d8-hck8v | Unhealthy | Readiness probe failed: HTTP probe failed with statuscode: 500 |
| | openshift-monitoring | kubelet | prometheus-k8s-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-1 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d24c4db9f6f0e9fb8ffdf9dd2b08101c37316b989e6709d13783e7d6d3baef73" in 1.848s (1.848s including waiting). Image size: 495748313 bytes. |
| | openshift-monitoring | kubelet | prometheus-k8s-1 | Started | Started container thanos-sidecar |
| | openshift-monitoring | kubelet | prometheus-k8s-1 | Created | Created container: thanos-sidecar |
| | openshift-monitoring | kubelet | prometheus-k8s-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-1 | Started | Started container kube-rbac-proxy |
| | openshift-monitoring | kubelet | prometheus-k8s-1 | Created | Created container: kube-rbac-proxy |
| | openshift-monitoring | kubelet | prometheus-k8s-1 | Created | Created container: kube-rbac-proxy-thanos |
| | openshift-monitoring | kubelet | prometheus-k8s-1 | Started | Started container kube-rbac-proxy-web |
| | openshift-monitoring | kubelet | prometheus-k8s-1 | Started | Started container kube-rbac-proxy-thanos |
| | openshift-monitoring | kubelet | prometheus-k8s-1 | Created | Created container: kube-rbac-proxy-web |
| (x10) | openshift-oauth-apiserver | kubelet | apiserver-7b6784d654-s9576 | Unhealthy | Readiness probe failed: HTTP probe failed with statuscode: 500 |
| (x10) | openshift-oauth-apiserver | kubelet | apiserver-7b6784d654-s9576 | ProbeError | Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd excluded: ok [+]etcd-readiness excluded: ok [+]poststarthook/start-apiserver-admission-initializer ok [+]informer-sync ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/max-in-flight-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/openshift.io-StartUserInformer ok [+]poststarthook/openshift.io-StartOAuthInformer ok [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok [-]shutdown failed: reason withheld readyz check failed |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-guardcontroller | kube-controller-manager-operator | PodUpdated | Updated Pod/kube-controller-manager-guard-master-0 -n openshift-kube-controller-manager because it changed |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ConfigMapUpdated |
Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml | |
openshift-etcd |
static-pod-installer |
installer-8-master-0 |
StaticPodInstallerCompleted |
Successfully installed revision 8 | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-installer-controller |
kube-controller-manager-operator |
NodeCurrentRevisionChanged |
Updated node "master-0" from revision 0 to 6 because static pod is ready | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 2 nodes are at revision 5; 0 nodes have achieved new revision 6" to "NodeInstallerProgressing: 2 nodes are at revision 5; 1 node is at revision 6",Available message changed from "StaticPodsAvailable: 2 nodes are active; 1 node is at revision 0; 2 nodes are at revision 5; 0 nodes have achieved new revision 6" to "StaticPodsAvailable: 3 nodes are active; 2 nodes are at revision 5; 1 node is at revision 6" | |
openshift-etcd |
kubelet |
etcd-master-0 |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:145b8ac6899b60bd933b5fe64e3eb49ddbc7401a13f30fda6fd207697e8c9ab8" | |
openshift-authentication |
kubelet |
oauth-openshift-65687bc9c8-h4cd4 |
Started |
Started container oauth-openshift | |
openshift-authentication |
multus |
oauth-openshift-65687bc9c8-h4cd4 |
AddedInterface |
Add eth0 [10.128.0.94/23] from ovn-kubernetes | |
openshift-authentication |
kubelet |
oauth-openshift-65687bc9c8-h4cd4 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ed5799bca3f9d73de168cbc978de18a72d2cb6bd22149c4fc813fb7b9a971f5a" already present on machine | |
openshift-authentication |
kubelet |
oauth-openshift-65687bc9c8-h4cd4 |
Created |
Created container: oauth-openshift | |
openshift-etcd |
kubelet |
etcd-master-0 |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:145b8ac6899b60bd933b5fe64e3eb49ddbc7401a13f30fda6fd207697e8c9ab8" in 2.747s (2.747s including waiting). Image size: 531186824 bytes. | |
openshift-etcd |
kubelet |
etcd-master-0 |
Created |
Created container: setup | |
openshift-etcd |
kubelet |
etcd-master-0 |
Started |
Started container etcd-ensure-env-vars | |
openshift-etcd |
kubelet |
etcd-master-0 |
Started |
Started container setup | |
openshift-etcd |
kubelet |
etcd-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:145b8ac6899b60bd933b5fe64e3eb49ddbc7401a13f30fda6fd207697e8c9ab8" already present on machine | |
openshift-etcd |
kubelet |
etcd-master-0 |
Created |
Created container: etcd-ensure-env-vars | |
openshift-etcd |
kubelet |
etcd-master-0 |
Created |
Created container: etcd-resources-copy | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 3/3 pods have been updated to the latest generation and 2/3 pods are available\nOAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 3/3 pods have been updated to the latest generation and 2/3 pods are available\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.34.12:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 3/3 pods have been updated to the latest generation and 2/3 pods are available\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.34.12:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: status.versions changed from [{"operator" "4.18.25"} {"oauth-apiserver" "4.18.25"}] to [{"operator" "4.18.25"} {"oauth-apiserver" "4.18.25"} {"oauth-openshift" "4.18.25_openshift"}] | |
openshift-etcd |
kubelet |
etcd-master-0 |
Started |
Started container etcd-resources-copy | |
openshift-etcd |
kubelet |
etcd-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:145b8ac6899b60bd933b5fe64e3eb49ddbc7401a13f30fda6fd207697e8c9ab8" already present on machine | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorVersionChanged |
clusteroperator/authentication version "oauth-openshift" changed from "" to "4.18.25_openshift" | |
openshift-etcd |
kubelet |
etcd-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:145b8ac6899b60bd933b5fe64e3eb49ddbc7401a13f30fda6fd207697e8c9ab8" already present on machine | |
openshift-etcd |
kubelet |
etcd-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:145b8ac6899b60bd933b5fe64e3eb49ddbc7401a13f30fda6fd207697e8c9ab8" already present on machine | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-installer-controller |
kube-controller-manager-operator |
NodeTargetRevisionChanged |
Updating node "master-1" from revision 5 to 6 because node master-1 with revision 5 is the oldest | |
openshift-etcd |
kubelet |
etcd-master-0 |
Created |
Created container: etcdctl | |
openshift-etcd |
kubelet |
etcd-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:145b8ac6899b60bd933b5fe64e3eb49ddbc7401a13f30fda6fd207697e8c9ab8" already present on machine | |
openshift-etcd |
kubelet |
etcd-master-0 |
Created |
Created container: etcd | |
openshift-etcd |
kubelet |
etcd-master-0 |
Started |
Started container etcdctl | |
openshift-etcd |
kubelet |
etcd-master-0 |
Started |
Started container etcd | |
openshift-etcd |
kubelet |
etcd-master-0 |
Started |
Started container etcd-readyz | |
openshift-etcd |
kubelet |
etcd-master-0 |
Created |
Created container: etcd-metrics | |
openshift-etcd |
kubelet |
etcd-master-0 |
Created |
Created container: etcd-readyz | |
openshift-etcd |
kubelet |
etcd-master-0 |
Started |
Started container etcd-rev | |
openshift-etcd |
kubelet |
etcd-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0117f94d9f2894980a318780f3c0ab2efba02e72bc7ccb267bd44c4900eb0174" already present on machine | |
openshift-etcd |
kubelet |
etcd-master-0 |
Started |
Started container etcd-metrics | |
openshift-etcd |
kubelet |
etcd-master-0 |
Created |
Created container: etcd-rev | |
openshift-etcd |
kubelet |
etcd-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0117f94d9f2894980a318780f3c0ab2efba02e72bc7ccb267bd44c4900eb0174" already present on machine | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-installer-controller |
kube-controller-manager-operator |
PodCreated |
Created Pod/installer-6-master-1 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-controller-manager |
multus |
installer-6-master-1 |
AddedInterface |
Add eth0 [10.128.0.95/23] from ovn-kubernetes | |
openshift-kube-controller-manager |
kubelet |
installer-6-master-1 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:67a578604f1437ddb47d87e748b6772d86dd3856048cc355226789db22724b55" already present on machine | |
openshift-console |
kubelet |
console-77d8f866f9-8jlq8 |
Started |
Started container console | |
openshift-console |
multus |
console-77d8f866f9-8jlq8 |
AddedInterface |
Add eth0 [10.130.0.24/23] from ovn-kubernetes | |
openshift-console |
kubelet |
console-77d8f866f9-8jlq8 |
Created |
Created container: console | |
openshift-console |
kubelet |
console-77d8f866f9-8jlq8 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8f562f655905d420982e90e7817214586a6e8103e7ef15d1dd57f2ae5ee16edb" already present on machine | |
openshift-oauth-apiserver |
kubelet |
apiserver-84c8b8d745-j8fqz |
Created |
Created container: fix-audit-permissions | |
openshift-kube-controller-manager |
kubelet |
installer-6-master-1 |
Started |
Started container installer | |
openshift-oauth-apiserver |
kubelet |
apiserver-84c8b8d745-j8fqz |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a10f1f5c782b4f4fb9c364625daf34791903749d4149eb87291c70598b16b404" already present on machine | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Killing |
Stopping container alertmanager | |
openshift-kube-controller-manager |
kubelet |
installer-6-master-1 |
Created |
Created container: installer | |
openshift-oauth-apiserver |
kubelet |
apiserver-84c8b8d745-j8fqz |
Started |
Started container fix-audit-permissions | |
openshift-oauth-apiserver |
kubelet |
apiserver-84c8b8d745-j8fqz |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a10f1f5c782b4f4fb9c364625daf34791903749d4149eb87291c70598b16b404" already present on machine | |
openshift-oauth-apiserver |
multus |
apiserver-84c8b8d745-j8fqz |
AddedInterface |
Add eth0 [10.128.0.96/23] from ovn-kubernetes | |
openshift-monitoring |
statefulset-controller |
alertmanager-main |
SuccessfulDelete |
delete Pod alertmanager-main-0 in StatefulSet alertmanager-main successful | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Killing |
Stopping container prom-label-proxy | |
openshift-oauth-apiserver |
kubelet |
apiserver-84c8b8d745-j8fqz |
Created |
Created container: oauth-apiserver | |
openshift-oauth-apiserver |
kubelet |
apiserver-84c8b8d745-j8fqz |
Started |
Started container oauth-apiserver | |
(x2) | openshift-monitoring |
statefulset-controller |
alertmanager-main |
SuccessfulCreate |
create Pod alertmanager-main-0 in StatefulSet alertmanager-main successful |
openshift-etcd-operator |
openshift-cluster-etcd-operator-guardcontroller |
etcd-operator |
PodCreated |
Created Pod/etcd-guard-master-0 -n openshift-etcd because it was missing | |
openshift-etcd |
kubelet |
etcd-guard-master-0 |
Created |
Created container: guard | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Started |
Started container init-config-reloader | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Created |
Created container: init-config-reloader | |
openshift-etcd |
kubelet |
etcd-guard-master-0 |
Started |
Started container guard | |
openshift-monitoring |
multus |
alertmanager-main-0 |
AddedInterface |
Add eth0 [10.129.0.82/23] from ovn-kubernetes | |
openshift-etcd |
multus |
etcd-guard-master-0 |
AddedInterface |
Add eth0 [10.130.0.25/23] from ovn-kubernetes | |
openshift-etcd |
kubelet |
etcd-guard-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0117f94d9f2894980a318780f3c0ab2efba02e72bc7ccb267bd44c4900eb0174" already present on machine | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d26267190f13ef59cf0f8f5eee729694c7faccc36ab1294566192272625a58af" already present on machine | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Created |
Created container: config-reloader | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Created |
Created container: kube-rbac-proxy-web | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d26267190f13ef59cf0f8f5eee729694c7faccc36ab1294566192272625a58af" already present on machine | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 3/3 pods have been updated to the latest generation and 2/3 pods are available\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.34.12:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "WellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.34.12:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Started |
Started container alertmanager | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Created |
Created container: alertmanager | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Started |
Started container kube-rbac-proxy-web | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:755d2dc7bc83f2e1c10e6a0a70dd9acdd6bc282ad4ae973794d262a785e9f6d6" already present on machine | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Created |
Created container: kube-rbac-proxy | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Started |
Started container kube-rbac-proxy | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Created |
Created container: kube-rbac-proxy-metric | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Started |
Started container kube-rbac-proxy-metric | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Started |
Started container config-reloader | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61f170d009db78c5df2e61a5de6cbd57283366bb46168eea3b0cca5f005bbf59" already present on machine | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Started |
Started container prom-label-proxy | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Created |
Created container: prom-label-proxy | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b283544da0bfbf6c8c5a11e0ca9fb4daaf4ac4ec910b30c07c7bef65a98f11d" already present on machine | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master-0 |
Created |
Created container: setup | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master-0 |
Started |
Started container setup | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: Missing operand on node master-0" to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: Missing PodIP in operand kube-apiserver-master-0 on node master-0" | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Progressing changed from True to False ("NodeInstallerProgressing: 3 nodes are at revision 6"),Available message changed from "StaticPodsAvailable: 2 nodes are active; 1 node is at revision 0; 2 nodes are at revision 6" to "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 6" | |
openshift-kube-apiserver |
static-pod-installer |
installer-5-master-0 |
StaticPodInstallerCompleted |
Successfully installed revision 5 | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-installer-controller |
openshift-kube-scheduler-operator |
NodeCurrentRevisionChanged |
Updated node "master-0" from revision 0 to 6 because static pod is ready | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b283544da0bfbf6c8c5a11e0ca9fb4daaf4ac4ec910b30c07c7bef65a98f11d" already present on machine | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master-0 |
Created |
Created container: kube-apiserver | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master-0 |
Started |
Started container kube-apiserver | |
(x5) | openshift-monitoring |
kubelet |
metrics-server-8475fbcb68-p4n8s |
ProbeError |
Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]poststarthook/max-in-flight-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]metric-storage-ready ok [+]metric-informer-sync ok [+]metadata-informer-sync ok [-]shutdown failed: reason withheld readyz check failed |
(x5) | openshift-monitoring |
kubelet |
metrics-server-8475fbcb68-p4n8s |
Unhealthy |
Readiness probe failed: HTTP probe failed with statuscode: 500 |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master-0 |
Created |
Created container: kube-apiserver-check-endpoints | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master-0 |
Started |
Started container kube-apiserver-check-endpoints | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master-0 |
Started |
Started container kube-apiserver-cert-syncer | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master-0 |
Created |
Created container: kube-apiserver-cert-regeneration-controller | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master-0 |
Started |
Started container kube-apiserver-insecure-readyz | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master-0 |
Created |
Created container: kube-apiserver-insecure-readyz | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master-0 |
Started |
Started container kube-apiserver-cert-regeneration-controller | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master-0 |
Created |
Created container: kube-apiserver-cert-syncer | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-guardcontroller |
etcd-operator |
PodUpdated |
Updated Pod/etcd-guard-master-0 -n openshift-etcd because it changed | |
openshift-kube-apiserver |
cert-regeneration-controller |
openshift-kube-apiserver |
ControlPlaneTopology |
unable to get control plane topology, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:serviceaccount:openshift-kube-apiserver:localhost-recovery-client" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "self-access-reviewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:service-account-issuer-discovery" not found, clusterrole.rbac.authorization.k8s.io "system:build-strategy-docker" not found, clusterrole.rbac.authorization.k8s.io "system:scope-impersonation" not found, clusterrole.rbac.authorization.k8s.io "helm-chartrepos-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:build-strategy-source" not found, clusterrole.rbac.authorization.k8s.io "system:openshift:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:oauth-token-deleter" not found, clusterrole.rbac.authorization.k8s.io "system:openshift:scc:restricted-v2" not found, clusterrole.rbac.authorization.k8s.io "system:build-strategy-jenkinspipeline" not found, clusterrole.rbac.authorization.k8s.io "system:openshift:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "console-extensions-reader" not found, clusterrole.rbac.authorization.k8s.io "basic-user" not found, clusterrole.rbac.authorization.k8s.io "cluster-admin" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "cluster-status" not found, clusterrole.rbac.authorization.k8s.io "system:webhook" not found] | |
openshift-kube-apiserver |
apiserver |
kube-apiserver-master-0 |
KubeAPIReadyz |
readyz=true | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Progressing message changed from "WellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.34.12:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "" | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Progressing message changed from "" to "WellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.34.12:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" | |
(x2) | openshift-etcd |
kubelet |
etcd-guard-master-0 |
Unhealthy |
Readiness probe failed: Get "https://192.168.34.10:9980/readyz": net/http: request canceled (Client.Timeout exceeded while awaiting headers) |
openshift-etcd-operator |
openshift-cluster-etcd-operator-etcd-client |
etcd-operator |
MemberAddAsLearner |
successfully added new member https://192.168.34.10:2380 | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-guardcontroller |
kube-apiserver-operator |
PodCreated |
Created Pod/kube-apiserver-guard-master-0 -n openshift-kube-apiserver because it was missing | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-etcd-client |
etcd-operator |
MemberPromote |
successfully promoted learner member https://192.168.34.10:2380 | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-guard-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-guard-master-0 |
Created |
Created container: guard | |
openshift-kube-apiserver |
multus |
kube-apiserver-guard-master-0 |
AddedInterface |
Add eth0 [10.130.0.26/23] from ovn-kubernetes | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-guard-master-0 |
Started |
Started container guard | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: Missing PodIP in operand kube-apiserver-master-0 on node master-0" to "NodeControllerDegraded: All master nodes are ready" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-guardcontroller |
kube-apiserver-operator |
PodUpdated |
Updated Pod/kube-apiserver-guard-master-0 -n openshift-kube-apiserver because it changed | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 1 node is at revision 2; 1 node is at revision 5" to "NodeInstallerProgressing: 1 node is at revision 2; 2 nodes are at revision 5",Available message changed from "StaticPodsAvailable: 2 nodes are active; 1 node is at revision 0; 1 node is at revision 2; 1 node is at revision 5" to "StaticPodsAvailable: 3 nodes are active; 1 node is at revision 2; 2 nodes are at revision 5" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-installer-controller |
kube-apiserver-operator |
NodeCurrentRevisionChanged |
Updated node "master-0" from revision 0 to 5 because static pod is ready | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master-1 |
Killing |
Stopping container kube-controller-manager | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master-1 |
Killing |
Stopping container kube-controller-manager-cert-syncer | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master-1 |
Killing |
Stopping container cluster-policy-controller | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master-1 |
Killing |
Stopping container kube-controller-manager-recovery-controller | |
openshift-kube-controller-manager |
static-pod-installer |
installer-6-master-1 |
StaticPodInstallerCompleted |
Successfully installed revision 6 | |
openshift-console-operator |
console-operator-status-controller-statussyncer_console |
console-operator |
OperatorStatusChanged |
Status for clusteroperator/console changed: Progressing message changed from "SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected" to "SyncLoopRefreshProgressing: working toward version 4.18.25, 2 replicas available" | |
openshift-authentication-operator |
cluster-authentication-operator-config-observer-configobserver |
authentication-operator |
ObservedConfigChanged |
Writing updated section ("oauthAPIServer") of observed config: "\u00a0\u00a0map[string]any{\n\u00a0\u00a0\t\"apiServerArguments\": map[string]any{\n\u00a0\u00a0\t\t\"api-audiences\": []any{string(\"https://kubernetes.default.svc\")},\n\u00a0\u00a0\t\t\"cors-allowed-origins\": []any{string(`//127\\.0\\.0\\.1(:|$)`), string(\"//localhost(:|$)\")},\n\u00a0\u00a0\t\t\"etcd-servers\": []any{\n+\u00a0\t\t\tstring(\"https://192.168.34.10:2379\"),\n\u00a0\u00a0\t\t\tstring(\"https://192.168.34.11:2379\"),\n\u00a0\u00a0\t\t\tstring(\"https://192.168.34.12:2379\"),\n\u00a0\u00a0\t\t},\n\u00a0\u00a0\t\t\"tls-cipher-suites\": []any{string(\"TLS_AES_128_GCM_SHA256\"), string(\"TLS_AES_256_GCM_SHA384\"), string(\"TLS_CHACHA20_POLY1305_SHA256\"), string(\"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256\"), ...},\n\u00a0\u00a0\t\t\"tls-min-version\": string(\"VersionTLS12\"),\n\u00a0\u00a0\t},\n\u00a0\u00a0}\n" | |
(x2) | openshift-kube-apiserver-operator |
kube-apiserver-operator-config-observer-configobserver |
kube-apiserver-operator |
ObserveStorageUpdated |
Updated storage urls to https://192.168.34.10:2379,https://192.168.34.11:2379,https://192.168.34.12:2379,https://localhost:2379 |
(x2) | openshift-authentication-operator |
cluster-authentication-operator-config-observer-configobserver |
authentication-operator |
ObserveStorageUpdated |
Updated storage urls to https://192.168.34.10:2379,https://192.168.34.11:2379,https://192.168.34.12:2379 |
openshift-etcd-operator |
openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller |
etcd-operator |
ConfigMapUpdated |
Updated ConfigMap/etcd-endpoints -n openshift-etcd: cause by changes in data.18f50f443f6f157e | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-config-observer-configobserver |
kube-apiserver-operator |
ObservedConfigChanged |
Writing updated observed config: map[string]any{ "admission": map[string]any{"pluginConfig": map[string]any{"PodSecurity": map[string]any{"configuration": map[string]any{"defaults": map[string]any{"audit": string("restricted"), "audit-version": string("latest"), "enforce": string("privileged"), "enforce-version": string("latest"), ...}}}, "network.openshift.io/ExternalIPRanger": map[string]any{"configuration": map[string]any{"allowIngressIP": bool(false), "apiVersion": string("network.openshift.io/v1"), "kind": string("ExternalIPRangerAdmissionConfig")}}, "network.openshift.io/RestrictedEndpointsAdmission": map[string]any{"configuration": map[string]any{"apiVersion": string("network.openshift.io/v1"), "kind": string("RestrictedEndpointsAdmissionConfig"), "restrictedCIDRs": []any{string("10.128.0.0/14"), string("172.30.0.0/16")}}}}}, "apiServerArguments": map[string]any{ "api-audiences": []any{string("https://kubernetes.default.svc")}, "authentication-token-webhook-config-file": []any{string("/etc/kubernetes/static-pod-resources/secrets/webhook-authenticat"...)}, "authentication-token-webhook-version": []any{string("v1")}, "etcd-servers": []any{ + string("https://192.168.34.10:2379"), string("https://192.168.34.11:2379"), string("https://192.168.34.12:2379"), string("https://localhost:2379"), }, "feature-gates": []any{string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), ...}, "goaway-chance": []any{string("0.001")}, ... // 4 identical entries }, "authConfig": map[string]any{"oauthMetadataFile": string("/etc/kubernetes/static-pod-resources/configmaps/oauth-metadata/o"...)}, "corsAllowedOrigins": []any{string(`//127\.0\.0\.1(:|$)`), string("//localhost(:|$)")}, ... // 2 identical entries } | |
 | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "WellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.34.12:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 5, desired generation is 6.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.34.12:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" |
(x9) | openshift-authentication-operator | oauth-apiserver-oauthapiservercontroller-workloadworkloadcontroller | authentication-operator | DeploymentUpdated | Updated Deployment.apps/apiserver -n openshift-oauth-apiserver because it changed |
 | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | StartingNewRevision | new revision 6 triggered by "required configmap/config has changed" |
(x5) | openshift-etcd-operator | openshift-cluster-etcd-operator-script-controller-scriptcontroller | etcd-operator | ConfigMapUpdated | Updated ConfigMap/etcd-scripts -n openshift-etcd: cause by changes in data.etcd.env |
(x3) | openshift-kube-apiserver-operator | kube-apiserver-operator-target-config-controller-targetconfigcontroller | kube-apiserver-operator | ConfigMapUpdated | Updated ConfigMap/config -n openshift-kube-apiserver: cause by changes in data.config.yaml |
(x7) | openshift-kube-controller-manager | kubelet | kube-controller-manager-guard-master-1 | Unhealthy | Readiness probe failed: Get "https://192.168.34.11:10257/healthz": dial tcp 192.168.34.11:10257: connect: connection refused |
 | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-pod-6 -n openshift-kube-apiserver because it was missing |
(x7) | openshift-kube-controller-manager | kubelet | kube-controller-manager-guard-master-1 | ProbeError | Readiness probe error: Get "https://192.168.34.11:10257/healthz": dial tcp 192.168.34.11:10257: connect: connection refused body: |
 | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/config-6 -n openshift-kube-apiserver because it was missing |
 | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-6 -n openshift-kube-apiserver because it was missing |
 | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:67a578604f1437ddb47d87e748b6772d86dd3856048cc355226789db22724b55" already present on machine |
 | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-1 | Started | Started container cluster-policy-controller |
 | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b283544da0bfbf6c8c5a11e0ca9fb4daaf4ac4ec910b30c07c7bef65a98f11d" already present on machine |
 | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-1 | Created | Created container: kube-controller-manager-cert-syncer |
 | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-1 | Started | Started container kube-controller-manager-cert-syncer |
 | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:67a578604f1437ddb47d87e748b6772d86dd3856048cc355226789db22724b55" already present on machine |
 | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-1 | Created | Created container: cluster-policy-controller |
 | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:19291a8938541dd95496e6f04aad7abf914ea2c8d076c1f149a12368682f85d4" already present on machine |
 | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-1 | Started | Started container kube-controller-manager |
 | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-1 | Created | Created container: kube-controller-manager |
 | openshift-kube-controller-manager | cluster-policy-controller | cluster-policy-controller-lock | LeaderElection | master-1_7d2da01f-3109-4c38-9b07-d5a1cb968548 became leader |
 | openshift-etcd | multus | installer-8-master-1 | AddedInterface | Add eth0 [10.128.0.97/23] from ovn-kubernetes |
 | openshift-etcd | kubelet | installer-8-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0117f94d9f2894980a318780f3c0ab2efba02e72bc7ccb267bd44c4900eb0174" already present on machine |
 | openshift-etcd | kubelet | installer-8-master-1 | Created | Created container: installer |
 | openshift-etcd | kubelet | installer-8-master-1 | Started | Started container installer |
 | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-1 | Created | Created container: kube-controller-manager-recovery-controller |
 | openshift-kube-controller-manager | cert-recovery-controller | cert-recovery-controller-lock | LeaderElection | master-1_82ff7004-fc15-47e0-9e0e-54f2d0ce4afb became leader |
 | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-1 | Started | Started container kube-controller-manager-recovery-controller |
 | openshift-kube-controller-manager | cluster-policy-controller | kube-controller-manager-master-1 | ControlPlaneTopology | unable to get control plane topology, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope |
 | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/oauth-metadata-6 -n openshift-kube-apiserver because it was missing |
 | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-controller-manager-master-1 container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-1 container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-1 container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-1 container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: " |
 | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/bound-sa-token-signing-certs-6 -n openshift-kube-apiserver because it was missing |
 | kube-system | kube-controller-manager | kube-controller-manager | LeaderElection | master-0_0d21a593-eb30-4a4e-957e-f89b0a6d6595 became leader |
 | openshift-oauth-apiserver | replicaset-controller | apiserver-7b6784d654 | SuccessfulCreate | Created pod: apiserver-7b6784d654-g299n |
 | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 5, desired generation is 6.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.34.12:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "WellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.34.12:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" |
 | default | node-controller | master-0 | RegisteredNode | Node master-0 event: Registered Node master-0 in Controller |
 | default | node-controller | master-1 | RegisteredNode | Node master-1 event: Registered Node master-1 in Controller |
 | default | node-controller | master-2 | RegisteredNode | Node master-2 event: Registered Node master-2 in Controller |
 | openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled up replica set console-564c479f to 2 |
 | openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled down replica set console-77d8f866f9 to 1 from 2 |
 | openshift-console | replicaset-controller | console-564c479f | SuccessfulCreate | Created pod: console-564c479f-7bglk |
 | openshift-console | replicaset-controller | console-564c479f | SuccessfulCreate | Created pod: console-564c479f-s9vtn |
 | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/etcd-serving-ca-6 -n openshift-kube-apiserver because it was missing |
 | openshift-oauth-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled up replica set apiserver-7b6784d654 to 1 from 0 |
 | openshift-console | replicaset-controller | console-77d8f866f9 | SuccessfulDelete | Deleted pod: console-77d8f866f9-8jlq8 |
 | openshift-oauth-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled down replica set apiserver-84c8b8d745 to 2 from 3 |
 | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Progressing message changed from "SyncLoopRefreshProgressing: working toward version 4.18.25, 2 replicas available" to "SyncLoopRefreshProgressing: working toward version 4.18.25, 1 replicas available" |
 | openshift-console | kubelet | console-77d8f866f9-8jlq8 | Killing | Stopping container console |
 | openshift-apiserver-operator | openshift-apiserver-operator-config-observer-configobserver | openshift-apiserver-operator | ObservedConfigChanged | Writing updated observed config: map[string]any{ ... // 2 identical entries "routingConfig": map[string]any{"subdomain": string("apps.ocp.openstack.lab")}, "servingInfo": map[string]any{"cipherSuites": []any{string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), string("TLS_CHACHA20_POLY1305_SHA256"), string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), ...}, "minTLSVersion": string("VersionTLS12")}, "storageConfig": map[string]any{ "urls": []any{ + string("https://192.168.34.10:2379"), string("https://192.168.34.11:2379"), string("https://192.168.34.12:2379"), }, }, } |
 | openshift-oauth-apiserver | replicaset-controller | apiserver-84c8b8d745 | SuccessfulDelete | Deleted pod: apiserver-84c8b8d745-j8fqz |
 | openshift-monitoring | statefulset-controller | prometheus-k8s | SuccessfulDelete | delete Pod prometheus-k8s-0 in StatefulSet prometheus-k8s successful |
 | openshift-oauth-apiserver | kubelet | apiserver-84c8b8d745-j8fqz | Killing | Stopping container oauth-apiserver |
(x4) | openshift-apiserver-operator | openshift-apiserver-operator-openshiftapiserver-workloadworkloadcontroller | openshift-apiserver-operator | ConfigMapUpdated | Updated ConfigMap/config -n openshift-apiserver: cause by changes in data.config.yaml |
(x3) | openshift-apiserver-operator | openshift-apiserver-operator-config-observer-configobserver | openshift-apiserver-operator | ObserveStorageUpdated | Updated storage urls to https://192.168.34.10:2379,https://192.168.34.11:2379,https://192.168.34.12:2379 |
 | openshift-monitoring | kubelet | prometheus-k8s-0 | Killing | Stopping container prometheus |
 | openshift-monitoring | kubelet | prometheus-k8s-0 | Killing | Stopping container kube-rbac-proxy |
 | openshift-monitoring | kubelet | prometheus-k8s-0 | Killing | Stopping container thanos-sidecar |
 | openshift-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled up replica set apiserver-65499f9774 to 1 from 0 |
 | openshift-apiserver | replicaset-controller | apiserver-65499f9774 | SuccessfulCreate | Created pod: apiserver-65499f9774-hhfd6 |
 | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-controller-manager-master-1 container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-1 container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-1 container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-1 container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: " to "NodeControllerDegraded: All master nodes are ready" |
 | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 3/3 pods have been updated to the latest generation and 2/3 pods are available\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 5, desired generation is 6." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 6, desired generation is 7.\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 5, desired generation is 6." |
 | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-server-ca-6 -n openshift-kube-apiserver because it was missing |
 | openshift-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled down replica set apiserver-5f68d4c887 to 2 from 3 |
 | openshift-apiserver | replicaset-controller | apiserver-5f68d4c887 | SuccessfulDelete | Deleted pod: apiserver-5f68d4c887-j7ckh |
(x5) | openshift-apiserver-operator | openshift-apiserver-operator-openshiftapiserver-workloadworkloadcontroller | openshift-apiserver-operator | DeploymentUpdated | Updated Deployment.apps/apiserver -n openshift-apiserver because it changed |
 | openshift-console | kubelet | console-564c479f-7bglk | Started | Started container console |
 | openshift-console | kubelet | console-564c479f-7bglk | Created | Created container: console |
 | openshift-console | kubelet | console-564c479f-7bglk | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8f562f655905d420982e90e7817214586a6e8103e7ef15d1dd57f2ae5ee16edb" already present on machine |
 | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 3/3 pods have been updated to the latest generation and 2/3 pods are available" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 3/3 pods have been updated to the latest generation and 2/3 pods are available\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 5, desired generation is 6." |
 | openshift-console | multus | console-564c479f-7bglk | AddedInterface | Add eth0 [10.129.0.83/23] from ovn-kubernetes |
 | openshift-etcd | kubelet | installer-8-master-1 | Killing | Stopping container installer |
 | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Progressing message changed from "NodeInstallerProgressing: 2 nodes are at revision 5; 1 node is at revision 8" to "NodeInstallerProgressing: 2 nodes are at revision 5; 1 node is at revision 8; 0 nodes have achieved new revision 9",Available message changed from "StaticPodsAvailable: 3 nodes are active; 2 nodes are at revision 5; 1 node is at revision 8\nEtcdMembersAvailable: 2 members are available" to "StaticPodsAvailable: 3 nodes are active; 2 nodes are at revision 5; 1 node is at revision 8; 0 nodes have achieved new revision 9\nEtcdMembersAvailable: 2 members are available" |
 | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "WellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.34.12:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 1/3 pods have been updated to the latest generation and 2/3 pods are available\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.34.12:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" |
 | openshift-monitoring | statefulset-controller | prometheus-k8s | SuccessfulCreate | create Pod prometheus-k8s-0 in StatefulSet prometheus-k8s successful |
 | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kubelet-serving-ca-6 -n openshift-kube-apiserver because it was missing |
 | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: prometheus |
 | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container prometheus |
 | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d26267190f13ef59cf0f8f5eee729694c7faccc36ab1294566192272625a58af" already present on machine |
 | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/sa-token-signing-certs-6 -n openshift-kube-apiserver because it was missing |
 | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b7ea005d75360221e268ef4a671bd1a5eb15acc98b32c7c716176ad5b6cd73d" already present on machine |
 | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container init-config-reloader |
 | openshift-monitoring | multus | prometheus-k8s-0 | AddedInterface | Add eth0 [10.129.0.84/23] from ovn-kubernetes |
 | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: init-config-reloader |
 | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d26267190f13ef59cf0f8f5eee729694c7faccc36ab1294566192272625a58af" already present on machine |
 | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d24c4db9f6f0e9fb8ffdf9dd2b08101c37316b989e6709d13783e7d6d3baef73" already present on machine |
 | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine |
 | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container kube-rbac-proxy-web |
 | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: thanos-sidecar |
 | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: kube-rbac-proxy-web |
 | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 6, desired generation is 7.\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 5, desired generation is 6." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 6, desired generation is 7." |
 | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: kube-rbac-proxy |
 | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container kube-rbac-proxy |
 | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: config-reloader |
 | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container config-reloader |
 | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine |
 | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: kube-rbac-proxy-thanos |
 | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-audit-policies-6 -n openshift-kube-apiserver because it was missing |
 | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container thanos-sidecar |
 | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine |
 | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 6, desired generation is 7." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/3 pods have been updated to the latest generation and 2/3 pods are available" |
 | openshift-apiserver | kubelet | apiserver-65499f9774-hhfd6 | FailedMount | MountVolume.SetUp failed for volume "audit" : failed to sync configmap cache: timed out waiting for the condition |
 | openshift-apiserver | kubelet | apiserver-65499f9774-hhfd6 | FailedMount | MountVolume.SetUp failed for volume "config" : failed to sync configmap cache: timed out waiting for the condition |
 | openshift-apiserver | kubelet | apiserver-65499f9774-hhfd6 | FailedMount | MountVolume.SetUp failed for volume "encryption-config" : failed to sync secret cache: timed out waiting for the condition |
 | openshift-apiserver | kubelet | apiserver-65499f9774-hhfd6 | FailedMount | MountVolume.SetUp failed for volume "image-import-ca" : failed to sync configmap cache: timed out waiting for the condition |
 | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container kube-rbac-proxy-thanos |
 | openshift-etcd | kubelet | installer-9-master-1 | Created | Created container: installer |
 | openshift-etcd | multus | installer-9-master-1 | AddedInterface | Add eth0 [10.128.0.99/23] from ovn-kubernetes |
 | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/etcd-client-6 -n openshift-kube-apiserver because it was missing |
 | openshift-apiserver | kubelet | apiserver-65499f9774-hhfd6 | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-s9wxw" : [failed to fetch token: serviceaccounts "openshift-apiserver-sa" is forbidden: User "system:node:master-1" cannot create resource "serviceaccounts/token" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'master-1' and this object, failed to sync configmap cache: timed out waiting for the condition] |
 | openshift-etcd | kubelet | installer-9-master-1 | Started | Started container installer |
 | openshift-etcd | kubelet | installer-9-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0117f94d9f2894980a318780f3c0ab2efba02e72bc7ccb267bd44c4900eb0174" already present on machine |
 | openshift-apiserver | multus | apiserver-65499f9774-hhfd6 | AddedInterface | Add eth0 [10.128.0.98/23] from ovn-kubernetes |
 | openshift-apiserver | kubelet | apiserver-65499f9774-hhfd6 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:db6d4edac103c373eb6bee221074d39e3707377b4d26444e98afb1a1363b3cb7" already present on machine |
 | openshift-apiserver | kubelet | apiserver-65499f9774-hhfd6 | Started | Started container fix-audit-permissions |
 | openshift-apiserver | kubelet | apiserver-65499f9774-hhfd6 | Created | Created container: fix-audit-permissions |
 | openshift-apiserver | kubelet | apiserver-65499f9774-hhfd6 | Created | Created container: openshift-apiserver |
 | openshift-apiserver | kubelet | apiserver-65499f9774-hhfd6 | Created | Created container: openshift-apiserver-check-endpoints |
 | openshift-apiserver | kubelet | apiserver-65499f9774-hhfd6 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:db6d4edac103c373eb6bee221074d39e3707377b4d26444e98afb1a1363b3cb7" already present on machine |
 | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-serving-certkey-6 -n openshift-kube-apiserver because it was missing |
 | openshift-apiserver | kubelet | apiserver-65499f9774-hhfd6 | Started | Started container openshift-apiserver |
 | openshift-apiserver | kubelet | apiserver-65499f9774-hhfd6 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine |
 | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | NodeCurrentRevisionChanged | Updated node "master-1" from revision 5 to 6 because static pod is ready |
 | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Progressing message changed from "NodeInstallerProgressing: 2 nodes are at revision 5; 1 node is at revision 6" to "NodeInstallerProgressing: 1 node is at revision 5; 2 nodes are at revision 6",Available message changed from "StaticPodsAvailable: 3 nodes are active; 2 nodes are at revision 5; 1 node is at revision 6" to "StaticPodsAvailable: 3 nodes are active; 1 node is at revision 5; 2 nodes are at revision 6" |
 | openshift-apiserver | kubelet | apiserver-65499f9774-hhfd6 | Started | Started container openshift-apiserver-check-endpoints |
 | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-client-token-6 -n openshift-kube-apiserver because it was missing |
 | openshift-etcd | kubelet | installer-9-master-1 | Killing | Stopping container installer |
 | openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled down replica set console-77d8f866f9 to 0 from 1 |
 | openshift-console | kubelet | console-77d8f866f9-skvf6 | Killing | Stopping container console |
 | openshift-console | replicaset-controller | console-77d8f866f9 | SuccessfulDelete | Deleted pod: console-77d8f866f9-skvf6 |
 | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/webhook-authenticator-6 -n openshift-kube-apiserver because it was missing |
 | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | RevisionTriggered | new revision 6 triggered by "required configmap/config has changed" |
 | openshift-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled down replica set apiserver-5f68d4c887 to 1 from 2 |
 | openshift-apiserver | kubelet | apiserver-5f68d4c887-pqcgn | Killing | Stopping container openshift-apiserver |
 | openshift-apiserver | kubelet | apiserver-5f68d4c887-pqcgn | Killing | Stopping container openshift-apiserver-check-endpoints |
 | openshift-apiserver | replicaset-controller | apiserver-5f68d4c887 | SuccessfulDelete | Deleted pod: apiserver-5f68d4c887-pqcgn |
 | openshift-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled up replica set apiserver-65499f9774 to 2 from 1 |
 | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | NodeTargetRevisionChanged | Updating node "master-2" from revision 2 to 5 because node master-2 with revision 2 is the oldest |
 | openshift-apiserver | replicaset-controller | apiserver-65499f9774 | SuccessfulCreate | Created pod: apiserver-65499f9774-d4zpq |
 | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | NodeTargetRevisionChanged | Updating node "master-2" from revision 5 to 6 because node master-2 with revision 5 is the oldest |
openshift-etcd |
multus |
installer-10-master-1 |
AddedInterface |
Add eth0 [10.128.0.100/23] from ovn-kubernetes | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/3 pods have been updated to the latest generation and 2/3 pods are available" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 2/3 pods have been updated to the latest generation and 2/3 pods are available" | |
 | openshift-etcd | kubelet | installer-10-master-1 | Started | Started container installer |
 | openshift-etcd | kubelet | installer-10-master-1 | Created | Created container: installer |
 | openshift-etcd | kubelet | installer-10-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0117f94d9f2894980a318780f3c0ab2efba02e72bc7ccb267bd44c4900eb0174" already present on machine |
 | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | PodCreated | Created Pod/installer-6-master-2 -n openshift-kube-controller-manager because it was missing |
 | openshift-kube-controller-manager | multus | installer-6-master-2 | AddedInterface | Add eth0 [10.129.0.85/23] from ovn-kubernetes |
 | openshift-kube-controller-manager | kubelet | installer-6-master-2 | Started | Started container installer |
 | openshift-kube-controller-manager | kubelet | installer-6-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:67a578604f1437ddb47d87e748b6772d86dd3856048cc355226789db22724b55" already present on machine |
 | openshift-kube-controller-manager | kubelet | installer-6-master-2 | Created | Created container: installer |
 | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 2; 2 nodes are at revision 5" to "NodeInstallerProgressing: 1 node is at revision 2; 2 nodes are at revision 5; 0 nodes have achieved new revision 6",Available message changed from "StaticPodsAvailable: 3 nodes are active; 1 node is at revision 2; 2 nodes are at revision 5" to "StaticPodsAvailable: 3 nodes are active; 1 node is at revision 2; 2 nodes are at revision 5; 0 nodes have achieved new revision 6" |
 | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | PodCreated | Created Pod/installer-6-master-2 -n openshift-kube-apiserver because it was missing |
 | openshift-kube-apiserver | kubelet | installer-6-master-2 | Created | Created container: installer |
 | openshift-kube-apiserver | kubelet | installer-6-master-2 | Started | Started container installer |
 | openshift-kube-apiserver | kubelet | installer-6-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine |
 | openshift-kube-apiserver | multus | installer-6-master-2 | AddedInterface | Add eth0 [10.129.0.86/23] from ovn-kubernetes |
 | openshift-console | kubelet | console-564c479f-s9vtn | Created | Created container: console |
 | openshift-console | kubelet | console-564c479f-s9vtn | Started | Started container console |
 | openshift-console | kubelet | console-564c479f-s9vtn | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8f562f655905d420982e90e7817214586a6e8103e7ef15d1dd57f2ae5ee16edb" already present on machine |
 | openshift-console | multus | console-564c479f-s9vtn | AddedInterface | Add eth0 [10.130.0.27/23] from ovn-kubernetes |
(x4) | openshift-apiserver | kubelet | apiserver-5f68d4c887-pqcgn | Unhealthy | Readiness probe failed: HTTP probe failed with statuscode: 500 |
(x4) | openshift-apiserver | kubelet | apiserver-5f68d4c887-pqcgn | ProbeError | Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd excluded: ok [+]etcd-readiness excluded: ok [+]poststarthook/start-apiserver-admission-initializer ok [+]informer-sync ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/max-in-flight-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/image.openshift.io-apiserver-caches ok [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok [+]poststarthook/project.openshift.io-projectcache ok [+]poststarthook/project.openshift.io-projectauthorizationcache ok [+]poststarthook/openshift.io-startinformers ok [+]poststarthook/openshift.io-restmapperupdater ok [+]poststarthook/quota.openshift.io-clusterquotamapping ok [-]shutdown failed: reason withheld readyz check failed |
(x9) | openshift-oauth-apiserver | kubelet | apiserver-84c8b8d745-j8fqz | ProbeError | Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd excluded: ok [+]etcd-readiness excluded: ok [+]poststarthook/start-apiserver-admission-initializer ok [+]informer-sync ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/max-in-flight-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/openshift.io-StartUserInformer ok [+]poststarthook/openshift.io-StartOAuthInformer ok [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok [-]shutdown failed: reason withheld readyz check failed |
(x9) | openshift-oauth-apiserver | kubelet | apiserver-84c8b8d745-j8fqz | Unhealthy | Readiness probe failed: HTTP probe failed with statuscode: 500 |
(x4) | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "WellKnownReadyControllerDegraded: kube-apiserver oauth endpoint https://192.168.34.12:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "All is well" |
(x4) | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "All is well" to "WellKnownReadyControllerDegraded: kube-apiserver oauth endpoint https://192.168.34.12:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" |
 | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 1/3 pods have been updated to the latest generation and 2/3 pods are available\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.34.12:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 1/3 pods have been updated to the latest generation and 2/3 pods are available" |
 | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 1/3 pods have been updated to the latest generation and 2/3 pods are available" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 1/3 pods have been updated to the latest generation and 2/3 pods are available\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.34.12:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" |
 | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-2 | Killing | Stopping container kube-controller-manager-cert-syncer |
 | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-2 | Killing | Stopping container kube-controller-manager-recovery-controller |
 | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-2 | Killing | Stopping container cluster-policy-controller |
 | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-2 | Killing | Stopping container kube-controller-manager |
 | openshift-kube-controller-manager | static-pod-installer | installer-6-master-2 | StaticPodInstallerCompleted | Successfully installed revision 6 |
 | openshift-oauth-apiserver | multus | apiserver-7b6784d654-g299n | AddedInterface | Add eth0 [10.128.0.101/23] from ovn-kubernetes |
 | openshift-oauth-apiserver | kubelet | apiserver-7b6784d654-g299n | Created | Created container: fix-audit-permissions |
 | openshift-oauth-apiserver | kubelet | apiserver-7b6784d654-g299n | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a10f1f5c782b4f4fb9c364625daf34791903749d4149eb87291c70598b16b404" already present on machine |
 | openshift-oauth-apiserver | kubelet | apiserver-7b6784d654-g299n | Created | Created container: oauth-apiserver |
 | openshift-oauth-apiserver | kubelet | apiserver-7b6784d654-g299n | Started | Started container fix-audit-permissions |
 | openshift-oauth-apiserver | kubelet | apiserver-7b6784d654-g299n | Started | Started container oauth-apiserver |
 | openshift-oauth-apiserver | kubelet | apiserver-7b6784d654-g299n | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a10f1f5c782b4f4fb9c364625daf34791903749d4149eb87291c70598b16b404" already present on machine |
 | openshift-oauth-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled up replica set apiserver-7b6784d654 to 2 from 1 |
 | openshift-oauth-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled down replica set apiserver-84c8b8d745 to 1 from 2 |
 | openshift-oauth-apiserver | replicaset-controller | apiserver-7b6784d654 | SuccessfulCreate | Created pod: apiserver-7b6784d654-27mg2 |
 | openshift-oauth-apiserver | replicaset-controller | apiserver-84c8b8d745 | SuccessfulDelete | Deleted pod: apiserver-84c8b8d745-wnpsp |
 | openshift-oauth-apiserver | kubelet | apiserver-84c8b8d745-wnpsp | Killing | Stopping container oauth-apiserver |
 | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 1/3 pods have been updated to the latest generation and 2/3 pods are available\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.34.12:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 2/3 pods have been updated to the latest generation and 2/3 pods are available\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.34.12:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" |
 | openshift-kube-apiserver | apiserver | kube-apiserver-master-2 | TerminationPreShutdownHooksFinished | All pre-shutdown hooks have been finished |
 | openshift-kube-apiserver | kubelet | kube-apiserver-master-2 | Killing | Stopping container kube-apiserver-cert-syncer |
 | openshift-kube-apiserver | kubelet | kube-apiserver-master-2 | Killing | Stopping container kube-apiserver-cert-regeneration-controller |
 | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-2 | Created | Created container: kube-controller-manager |
 | openshift-kube-apiserver | apiserver | kube-apiserver-master-2 | ShutdownInitiated | Received signal to terminate, becoming unready, but keeping serving |
 | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-2 | Started | Started container kube-controller-manager |
 | openshift-kube-apiserver | kubelet | kube-apiserver-master-2 | Killing | Stopping container kube-apiserver-check-endpoints |
 | openshift-kube-apiserver | kubelet | kube-apiserver-master-2 | Killing | Stopping container kube-apiserver |
 | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b283544da0bfbf6c8c5a11e0ca9fb4daaf4ac4ec910b30c07c7bef65a98f11d" already present on machine |
 | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:19291a8938541dd95496e6f04aad7abf914ea2c8d076c1f149a12368682f85d4" already present on machine |
 | openshift-kube-apiserver | static-pod-installer | installer-6-master-2 | StaticPodInstallerCompleted | Successfully installed revision 6 |
 | openshift-kube-apiserver | kubelet | kube-apiserver-master-2 | Killing | Stopping container kube-apiserver-insecure-readyz |
 | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:67a578604f1437ddb47d87e748b6772d86dd3856048cc355226789db22724b55" already present on machine |
 | openshift-kube-apiserver | kubelet | kube-apiserver-guard-master-2 | ProbeError | Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]api-openshift-apiserver-available ok [+]api-openshift-oauth-apiserver-available ok [+]informer-sync ok [+]poststarthook/quota.openshift.io-clusterquotamapping ok [+]poststarthook/openshift.io-api-request-count-filter ok [+]poststarthook/openshift.io-startkubeinformers ok [+]poststarthook/openshift.io-openshift-apiserver-reachable ok [+]poststarthook/openshift.io-oauth-apiserver-reachable ok [+]poststarthook/start-apiserver-admission-initializer ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/priority-and-fairness-config-consumer ok [+]poststarthook/priority-and-fairness-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/start-apiextensions-informers ok [+]poststarthook/start-apiextensions-controllers ok [+]poststarthook/crd-informer-synced ok [+]poststarthook/start-system-namespaces-controller ok [+]poststarthook/start-cluster-authentication-info-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok [+]poststarthook/start-legacy-token-tracking-controller ok [+]poststarthook/start-service-ip-repair-controllers ok [+]poststarthook/rbac/bootstrap-roles ok [+]poststarthook/scheduling/bootstrap-system-priority-classes ok [+]poststarthook/priority-and-fairness-config-producer ok [+]poststarthook/bootstrap-controller ok [+]poststarthook/aggregator-reload-proxy-client-cert ok [+]poststarthook/start-kube-aggregator-informers ok [+]poststarthook/apiservice-status-local-available-controller ok [+]poststarthook/apiservice-status-remote-available-controller ok [+]poststarthook/apiservice-registration-controller ok [+]poststarthook/apiservice-wait-for-first-sync ok [+]poststarthook/apiservice-discovery-controller ok [+]poststarthook/kube-apiserver-autoregistration ok [+]autoregister-completion ok [+]poststarthook/apiservice-openapi-controller ok [+]poststarthook/apiservice-openapiv3-controller ok [-]shutdown failed: reason withheld readyz check failed |
 | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-2 | Started | Started container kube-controller-manager-recovery-controller |
(x16) | openshift-kube-apiserver | kubelet | kube-apiserver-guard-master-2 | Unhealthy | Readiness probe failed: HTTP probe failed with statuscode: 500 |
 | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:67a578604f1437ddb47d87e748b6772d86dd3856048cc355226789db22724b55" already present on machine |
 | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-2 | Started | Started container cluster-policy-controller |
 | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-2 | Created | Created container: cluster-policy-controller |
 | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-2 | Created | Created container: kube-controller-manager-cert-syncer |
 | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 2/3 pods have been updated to the latest generation and 2/3 pods are available\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.34.12:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 2/3 pods have been updated to the latest generation and 2/3 pods are available",Available message changed from "WellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.34.12:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "WellKnownAvailable: The well-known endpoint is not yet available: need at least 3 kube-apiservers, got 2" |
 | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "All is well" to "WellKnownReadyControllerDegraded: need at least 3 kube-apiservers, got 2" |
 | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-2 | Created | Created container: kube-controller-manager-recovery-controller |
 | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-2 | Started | Started container kube-controller-manager-cert-syncer |
 | openshift-kube-controller-manager | cluster-policy-controller | kube-controller-manager-master-2 | ControlPlaneTopology | unable to get control plane topology, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope |
(x2) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-controller-manager-master-2 container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-2 container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-2 container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-2 container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: " |
 | openshift-etcd | kubelet | etcd-master-1 | Killing | Stopping container etcd-rev |
 | openshift-etcd | static-pod-installer | installer-10-master-1 | StaticPodInstallerCompleted | Successfully installed revision 10 |
 | openshift-etcd | kubelet | etcd-master-1 | Killing | Stopping container etcdctl |
(x2) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-controller-manager-master-2 container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-2 container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-2 container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-2 container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: " to "NodeControllerDegraded: All master nodes are ready" |
(x13) | openshift-etcd | kubelet | etcd-guard-master-1 | Unhealthy | Readiness probe failed: Get "https://192.168.34.11:9980/readyz": dial tcp 192.168.34.11:9980: connect: connection refused |
(x13) | openshift-etcd | kubelet | etcd-guard-master-1 | ProbeError | Readiness probe error: Get "https://192.168.34.11:9980/readyz": dial tcp 192.168.34.11:9980: connect: connection refused body: |
 | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-2 | Unhealthy | Startup probe failed: Get "https://192.168.34.12:10257/healthz": dial tcp 192.168.34.12:10257: connect: connection refused |
 | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-2 | ProbeError | Startup probe error: Get "https://192.168.34.12:10257/healthz": dial tcp 192.168.34.12:10257: connect: connection refused body: |
(x12) | openshift-kube-controller-manager | kubelet | kube-controller-manager-guard-master-2 | ProbeError | Readiness probe error: Get "https://192.168.34.12:10257/healthz": dial tcp 192.168.34.12:10257: connect: connection refused body: |
(x12) | openshift-kube-controller-manager | kubelet | kube-controller-manager-guard-master-2 | Unhealthy | Readiness probe failed: Get "https://192.168.34.12:10257/healthz": dial tcp 192.168.34.12:10257: connect: connection refused |
 | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | NodeCurrentRevisionChanged | Updated node "master-2" from revision 5 to 6 because static pod is ready |
 | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Progressing changed from True to False ("NodeInstallerProgressing: 3 nodes are at revision 6"),Available message changed from "StaticPodsAvailable: 3 nodes are active; 1 node is at revision 5; 2 nodes are at revision 6" to "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 6" |
 | openshift-kube-apiserver | cert-regeneration-controller | cert-regeneration-controller-lock | LeaderElection | master-1_0edd0d74-6a6f-43f5-a08e-93044c3f6322 became leader |
(x8) | openshift-oauth-apiserver | kubelet | apiserver-84c8b8d745-wnpsp | Unhealthy | Readiness probe failed: HTTP probe failed with statuscode: 500 |
(x9) | openshift-oauth-apiserver | kubelet | apiserver-84c8b8d745-wnpsp | ProbeError | Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd excluded: ok [+]etcd-readiness excluded: ok [+]poststarthook/start-apiserver-admission-initializer ok [+]informer-sync ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/max-in-flight-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/openshift.io-StartUserInformer ok [+]poststarthook/openshift.io-StartOAuthInformer ok [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok [-]shutdown failed: reason withheld readyz check failed |
 | openshift-etcd | kubelet | etcd-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:145b8ac6899b60bd933b5fe64e3eb49ddbc7401a13f30fda6fd207697e8c9ab8" already present on machine |
 | openshift-etcd | kubelet | etcd-master-1 | Created | Created container: setup |
 | openshift-etcd | kubelet | etcd-master-1 | Started | Started container setup |
 | openshift-etcd | kubelet | etcd-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:145b8ac6899b60bd933b5fe64e3eb49ddbc7401a13f30fda6fd207697e8c9ab8" already present on machine |
 | openshift-etcd | kubelet | etcd-master-1 | Started | Started container etcd-ensure-env-vars |
 | openshift-etcd | kubelet | etcd-master-1 | Created | Created container: etcd-ensure-env-vars |
 | openshift-etcd | kubelet | etcd-master-1 | Started | Started container etcd-resources-copy |
 | openshift-etcd | kubelet | etcd-master-1 | Created | Created container: etcd-resources-copy |
 | openshift-etcd | kubelet | etcd-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:145b8ac6899b60bd933b5fe64e3eb49ddbc7401a13f30fda6fd207697e8c9ab8" already present on machine |
 | openshift-etcd | kubelet | etcd-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:145b8ac6899b60bd933b5fe64e3eb49ddbc7401a13f30fda6fd207697e8c9ab8" already present on machine |
 | openshift-etcd | kubelet | etcd-master-1 | Started | Started container etcd-metrics |
 | openshift-etcd | kubelet | etcd-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:145b8ac6899b60bd933b5fe64e3eb49ddbc7401a13f30fda6fd207697e8c9ab8" already present on machine |
 | openshift-etcd | kubelet | etcd-master-1 | Created | Created container: etcd-metrics |
 | openshift-etcd | kubelet | etcd-master-1 | Created | Created container: etcdctl |
 | openshift-etcd | kubelet | etcd-master-1 | Started | Started container etcdctl |
 | openshift-etcd | kubelet | etcd-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:145b8ac6899b60bd933b5fe64e3eb49ddbc7401a13f30fda6fd207697e8c9ab8" already present on machine |
 | openshift-etcd | kubelet | etcd-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0117f94d9f2894980a318780f3c0ab2efba02e72bc7ccb267bd44c4900eb0174" already present on machine |
 | openshift-etcd | kubelet | etcd-master-1 | Created | Created container: etcd |
 | openshift-etcd | kubelet | etcd-master-1 | Started | Started container etcd |
 | openshift-etcd | kubelet | etcd-master-1 | Started | Started container etcd-rev |
 | openshift-etcd | kubelet | etcd-master-1 | Created | Created container: etcd-rev |
 | openshift-etcd | kubelet | etcd-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0117f94d9f2894980a318780f3c0ab2efba02e72bc7ccb267bd44c4900eb0174" already present on machine |
 | openshift-etcd | kubelet | etcd-master-1 | Started | Started container etcd-readyz |
 | openshift-etcd | kubelet | etcd-master-1 | Created | Created container: etcd-readyz |
 | openshift-oauth-apiserver | kubelet | apiserver-7b6784d654-27mg2 | Started | Started container fix-audit-permissions |
 | openshift-oauth-apiserver | kubelet | apiserver-7b6784d654-27mg2 | Created | Created container: fix-audit-permissions |
 | openshift-oauth-apiserver | kubelet | apiserver-7b6784d654-27mg2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a10f1f5c782b4f4fb9c364625daf34791903749d4149eb87291c70598b16b404" already present on machine |
 | openshift-oauth-apiserver | multus | apiserver-7b6784d654-27mg2 | AddedInterface | Add eth0 [10.130.0.28/23] from ovn-kubernetes |
 | openshift-oauth-apiserver | kubelet | apiserver-7b6784d654-27mg2 | Created | Created container: oauth-apiserver |
 | openshift-oauth-apiserver | kubelet | apiserver-7b6784d654-27mg2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a10f1f5c782b4f4fb9c364625daf34791903749d4149eb87291c70598b16b404" already present on machine |
 | openshift-oauth-apiserver | kubelet | apiserver-7b6784d654-27mg2 | Started | Started container oauth-apiserver |
 | openshift-oauth-apiserver | replicaset-controller | apiserver-84c8b8d745 | SuccessfulDelete | Deleted pod: apiserver-84c8b8d745-p4css |
 | openshift-oauth-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled up replica set apiserver-7b6784d654 to 3 from 2 |
 | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 2/3 pods have been updated to the latest generation and 2/3 pods are available" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 2/3 pods have been updated to the latest generation and 3/3 pods are available" |
 | openshift-oauth-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled down replica set apiserver-84c8b8d745 to 0 from 1 |
 | openshift-oauth-apiserver | replicaset-controller | apiserver-7b6784d654 | SuccessfulCreate | Created pod: apiserver-7b6784d654-8vpmp |
 | openshift-oauth-apiserver | kubelet | apiserver-84c8b8d745-p4css | Killing | Stopping container oauth-apiserver |
 | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 2/3 pods have been updated to the latest generation and 3/3 pods are available" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 3/3 pods have been updated to the latest generation and 2/3 pods are available" |
 | openshift-etcd-operator | openshift-cluster-etcd-operator-installer-controller | etcd-operator | NodeCurrentRevisionChanged | Updated node "master-1" from revision 5 to 10 because static pod is ready |
 | openshift-apiserver | multus | apiserver-65499f9774-d4zpq | AddedInterface | Add eth0 [10.130.0.29/23] from ovn-kubernetes |
 | openshift-apiserver | kubelet | apiserver-65499f9774-d4zpq | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine |
 | openshift-apiserver | kubelet | apiserver-65499f9774-d4zpq | Started | Started container openshift-apiserver |
 | openshift-apiserver | kubelet | apiserver-65499f9774-d4zpq | Created | Created container: openshift-apiserver |
 | openshift-apiserver | kubelet | apiserver-65499f9774-d4zpq | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:db6d4edac103c373eb6bee221074d39e3707377b4d26444e98afb1a1363b3cb7" already present on machine |
 | openshift-apiserver | kubelet | apiserver-65499f9774-d4zpq | Started | Started container fix-audit-permissions |
 | openshift-apiserver | kubelet | apiserver-65499f9774-d4zpq | Created | Created container: fix-audit-permissions |
 | openshift-apiserver | kubelet | apiserver-65499f9774-d4zpq | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:db6d4edac103c373eb6bee221074d39e3707377b4d26444e98afb1a1363b3cb7" already present on machine |
 | openshift-apiserver | kubelet | apiserver-65499f9774-d4zpq | Started | Started container openshift-apiserver-check-endpoints |
 | openshift-apiserver | kubelet | apiserver-65499f9774-d4zpq | Created | Created container: openshift-apiserver-check-endpoints |
 | openshift-apiserver | kubelet | apiserver-5f68d4c887-s2fvb | Killing | Stopping container openshift-apiserver-check-endpoints |
 | openshift-apiserver | kubelet | apiserver-5f68d4c887-s2fvb | Killing | Stopping container openshift-apiserver |
 | openshift-apiserver | replicaset-controller | apiserver-65499f9774 | SuccessfulCreate | Created pod: apiserver-65499f9774-b84hw |
 | openshift-apiserver | replicaset-controller | apiserver-5f68d4c887 | SuccessfulDelete | Deleted pod: apiserver-5f68d4c887-s2fvb |
 | openshift-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled up replica set apiserver-65499f9774 to 3 from 2 |
 | openshift-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled down replica set apiserver-5f68d4c887 to 0 from 1 |
(x2) | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 2/3 pods have been updated to the latest generation and 2/3 pods are available" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 3/3 pods have been updated to the latest generation and 2/3 pods are available" |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-2 | AfterShutdownDelayDuration | The minimal shutdown duration of 1m10s finished |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-2 | HTTPServerStoppedListening | HTTP Server has stopped listening |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-2 | InFlightRequestsDrained | All non long-running request(s) in-flight have drained |
| | openshift-etcd | kubelet | installer-10-master-2 | Created | Created container: installer |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-2 | TerminationGracefulTerminationFinished | All pending requests processed |
| | openshift-etcd | multus | installer-10-master-2 | AddedInterface | Add eth0 [10.129.0.87/23] from ovn-kubernetes |
| | openshift-etcd | kubelet | installer-10-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0117f94d9f2894980a318780f3c0ab2efba02e72bc7ccb267bd44c4900eb0174" already present on machine |
| | openshift-etcd | kubelet | installer-10-master-2 | Started | Started container installer |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b283544da0bfbf6c8c5a11e0ca9fb4daaf4ac4ec910b30c07c7bef65a98f11d" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-2 | Started | Started container setup |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-2 | Created | Created container: setup |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-2 | Started | Started container kube-apiserver |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b283544da0bfbf6c8c5a11e0ca9fb4daaf4ac4ec910b30c07c7bef65a98f11d" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-2 | Created | Created container: kube-apiserver |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-2 | Started | Started container kube-apiserver-cert-syncer |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-2 | Created | Created container: kube-apiserver-cert-syncer |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-2 | Created | Created container: kube-apiserver-insecure-readyz |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-2 | Started | Started container kube-apiserver-insecure-readyz |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-2 | Started | Started container kube-apiserver-cert-regeneration-controller |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-2 | Created | Created container: kube-apiserver-cert-regeneration-controller |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-2 | Created | Created container: kube-apiserver-check-endpoints |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-2 | Started | Started container kube-apiserver-check-endpoints |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-2 | KubeAPIReadyz | readyz=true |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "WellKnownReadyControllerDegraded: need at least 3 kube-apiservers, got 2" to "All is well" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available changed from False to True ("All is well") |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| (x9) | openshift-oauth-apiserver | kubelet | apiserver-84c8b8d745-p4css | Unhealthy | Readiness probe failed: HTTP probe failed with statuscode: 500 |
| (x7) | openshift-apiserver | kubelet | apiserver-5f68d4c887-s2fvb | Unhealthy | Readiness probe failed: HTTP probe failed with statuscode: 500 |
| | openshift-etcd | kubelet | etcd-master-2 | Killing | Stopping container etcd-rev |
| | openshift-etcd | static-pod-installer | installer-10-master-2 | StaticPodInstallerCompleted | Successfully installed revision 10 |
| (x10) | openshift-oauth-apiserver | kubelet | apiserver-84c8b8d745-p4css | ProbeError | Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd excluded: ok [+]etcd-readiness excluded: ok [+]poststarthook/start-apiserver-admission-initializer ok [+]informer-sync ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/max-in-flight-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/openshift.io-StartUserInformer ok [+]poststarthook/openshift.io-StartOAuthInformer ok [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok [-]shutdown failed: reason withheld readyz check failed |
| | openshift-etcd | kubelet | etcd-master-2 | Killing | Stopping container etcdctl |
| (x8) | openshift-apiserver | kubelet | apiserver-5f68d4c887-s2fvb | ProbeError | Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd excluded: ok [+]etcd-readiness excluded: ok [+]poststarthook/start-apiserver-admission-initializer ok [+]informer-sync ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/max-in-flight-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/image.openshift.io-apiserver-caches ok [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok [+]poststarthook/project.openshift.io-projectcache ok [+]poststarthook/project.openshift.io-projectauthorizationcache ok [+]poststarthook/openshift.io-startinformers ok [+]poststarthook/openshift.io-restmapperupdater ok [+]poststarthook/quota.openshift.io-clusterquotamapping ok [-]shutdown failed: reason withheld readyz check failed |
| (x10) | openshift-etcd | kubelet | etcd-guard-master-2 | Unhealthy | Readiness probe failed: Get "https://192.168.34.12:9980/readyz": dial tcp 192.168.34.12:9980: connect: connection refused |
| (x10) | openshift-etcd | kubelet | etcd-guard-master-2 | ProbeError | Readiness probe error: Get "https://192.168.34.12:9980/readyz": dial tcp 192.168.34.12:9980: connect: connection refused body: |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-oauth-apiserver | kubelet | apiserver-7b6784d654-8vpmp | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a10f1f5c782b4f4fb9c364625daf34791903749d4149eb87291c70598b16b404" already present on machine |
| | openshift-oauth-apiserver | multus | apiserver-7b6784d654-8vpmp | AddedInterface | Add eth0 [10.129.0.88/23] from ovn-kubernetes |
| | openshift-oauth-apiserver | kubelet | apiserver-7b6784d654-8vpmp | Created | Created container: fix-audit-permissions |
| | openshift-oauth-apiserver | kubelet | apiserver-7b6784d654-8vpmp | Started | Started container fix-audit-permissions |
| | openshift-oauth-apiserver | kubelet | apiserver-7b6784d654-8vpmp | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a10f1f5c782b4f4fb9c364625daf34791903749d4149eb87291c70598b16b404" already present on machine |
| | openshift-oauth-apiserver | kubelet | apiserver-7b6784d654-8vpmp | Created | Created container: oauth-apiserver |
| | openshift-oauth-apiserver | kubelet | apiserver-7b6784d654-8vpmp | Started | Started container oauth-apiserver |
| (x3) | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing changed from True to False ("AuthenticatorCertKeyProgressing: All is well") |
| | openshift-apiserver-operator | openshift-apiserver-operator-connectivity-check-controller-connectivitycheckcontroller | openshift-apiserver-operator | CustomResourceDefinitionCreateFailed | Failed to create CustomResourceDefinition.apiextensions.k8s.io/podnetworkconnectivitychecks.controlplane.operator.openshift.io: customresourcedefinitions.apiextensions.k8s.io "podnetworkconnectivitychecks.controlplane.operator.openshift.io" already exists |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-connectivity-check-controller-connectivitycheckcontroller | kube-apiserver-operator | CustomResourceDefinitionCreateFailed | Failed to create CustomResourceDefinition.apiextensions.k8s.io/podnetworkconnectivitychecks.controlplane.operator.openshift.io: customresourcedefinitions.apiextensions.k8s.io "podnetworkconnectivitychecks.controlplane.operator.openshift.io" already exists |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | NodeCurrentRevisionChanged | Updated node "master-2" from revision 2 to 6 because static pod is ready |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 2; 2 nodes are at revision 5; 0 nodes have achieved new revision 6" to "NodeInstallerProgressing: 2 nodes are at revision 5; 1 node is at revision 6",Available message changed from "StaticPodsAvailable: 3 nodes are active; 1 node is at revision 2; 2 nodes are at revision 5; 0 nodes have achieved new revision 6" to "StaticPodsAvailable: 3 nodes are active; 2 nodes are at revision 5; 1 node is at revision 6" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | NodeTargetRevisionChanged | Updating node "master-0" from revision 5 to 6 because node master-0 with revision 5 is the oldest |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | PodCreated | Created Pod/installer-6-master-0 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver | kubelet | installer-6-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine |
| | openshift-kube-apiserver | multus | installer-6-master-0 | AddedInterface | Add eth0 [10.130.0.30/23] from ovn-kubernetes |
| | openshift-kube-apiserver | kubelet | installer-6-master-0 | Created | Created container: installer |
| | openshift-kube-apiserver | kubelet | installer-6-master-0 | Started | Started container installer |
| | openshift-etcd | kubelet | etcd-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:145b8ac6899b60bd933b5fe64e3eb49ddbc7401a13f30fda6fd207697e8c9ab8" already present on machine |
| | openshift-etcd | kubelet | etcd-master-2 | Started | Started container etcd-ensure-env-vars |
| | openshift-etcd | kubelet | etcd-master-2 | Created | Created container: etcd-ensure-env-vars |
| | openshift-etcd | kubelet | etcd-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:145b8ac6899b60bd933b5fe64e3eb49ddbc7401a13f30fda6fd207697e8c9ab8" already present on machine |
| | openshift-etcd | kubelet | etcd-master-2 | Started | Started container setup |
| | openshift-etcd | kubelet | etcd-master-2 | Created | Created container: setup |
| | openshift-etcd | kubelet | etcd-master-2 | Created | Created container: etcd-resources-copy |
| | openshift-etcd | kubelet | etcd-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:145b8ac6899b60bd933b5fe64e3eb49ddbc7401a13f30fda6fd207697e8c9ab8" already present on machine |
| | openshift-etcd | kubelet | etcd-master-2 | Started | Started container etcd-resources-copy |
| | openshift-etcd | kubelet | etcd-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:145b8ac6899b60bd933b5fe64e3eb49ddbc7401a13f30fda6fd207697e8c9ab8" already present on machine |
| | openshift-etcd | kubelet | etcd-master-2 | Started | Started container etcdctl |
| | openshift-etcd | kubelet | etcd-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0117f94d9f2894980a318780f3c0ab2efba02e72bc7ccb267bd44c4900eb0174" already present on machine |
| | openshift-etcd | kubelet | etcd-master-2 | Started | Started container etcd-metrics |
| | openshift-etcd | kubelet | etcd-master-2 | Created | Created container: etcd-metrics |
| | openshift-etcd | kubelet | etcd-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:145b8ac6899b60bd933b5fe64e3eb49ddbc7401a13f30fda6fd207697e8c9ab8" already present on machine |
| | openshift-etcd | kubelet | etcd-master-2 | Started | Started container etcd |
| | openshift-etcd | kubelet | etcd-master-2 | Created | Created container: etcd |
| | openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-master-1 | CreatedSCCRanges | created SCC ranges for openshift-storage namespace |
| | openshift-etcd | kubelet | etcd-master-2 | Created | Created container: etcdctl |
| | openshift-etcd | kubelet | etcd-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:145b8ac6899b60bd933b5fe64e3eb49ddbc7401a13f30fda6fd207697e8c9ab8" already present on machine |
| | openshift-etcd | kubelet | etcd-master-2 | Started | Started container etcd-rev |
| | openshift-etcd | kubelet | etcd-master-2 | Created | Created container: etcd-readyz |
| | openshift-etcd | kubelet | etcd-master-2 | Created | Created container: etcd-rev |
| | openshift-etcd | kubelet | etcd-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0117f94d9f2894980a318780f3c0ab2efba02e72bc7ccb267bd44c4900eb0174" already present on machine |
| | openshift-etcd | kubelet | etcd-master-2 | Started | Started container etcd-readyz |
| | openshift-marketplace | job-controller | 4771432b41461e44875d05f444712a00e992d5a0d93af947c146bd94b72886a | SuccessfulCreate | Created pod: 4771432b41461e44875d05f444712a00e992d5a0d93af947c146bd94b725qlb |
| | openshift-marketplace | kubelet | 4771432b41461e44875d05f444712a00e992d5a0d93af947c146bd94b725qlb | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac3e8e21a2acf57632da1156613d3ce424cc06446f4bd47349c7919367e1ff0f" already present on machine |
| | openshift-marketplace | kubelet | 4771432b41461e44875d05f444712a00e992d5a0d93af947c146bd94b725qlb | Started | Started container util |
| | openshift-marketplace | kubelet | 4771432b41461e44875d05f444712a00e992d5a0d93af947c146bd94b725qlb | Created | Created container: util |
| | openshift-marketplace | multus | 4771432b41461e44875d05f444712a00e992d5a0d93af947c146bd94b725qlb | AddedInterface | Add eth0 [10.129.0.89/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | 4771432b41461e44875d05f444712a00e992d5a0d93af947c146bd94b725qlb | Pulling | Pulling image "registry.redhat.io/lvms4/lvms-operator-bundle@sha256:6e809f8393e9c3004a9b8d80eb6ea708c0ab1e124083c481b48c01a359684588" |
| | openshift-marketplace | kubelet | 4771432b41461e44875d05f444712a00e992d5a0d93af947c146bd94b725qlb | Started | Started container pull |
| | openshift-marketplace | kubelet | 4771432b41461e44875d05f444712a00e992d5a0d93af947c146bd94b725qlb | Created | Created container: pull |
| | openshift-marketplace | kubelet | 4771432b41461e44875d05f444712a00e992d5a0d93af947c146bd94b725qlb | Pulled | Successfully pulled image "registry.redhat.io/lvms4/lvms-operator-bundle@sha256:6e809f8393e9c3004a9b8d80eb6ea708c0ab1e124083c481b48c01a359684588" in 1.592s (1.592s including waiting). Image size: 111519 bytes. |
| | openshift-marketplace | kubelet | 4771432b41461e44875d05f444712a00e992d5a0d93af947c146bd94b725qlb | Started | Started container extract |
| | openshift-marketplace | kubelet | 4771432b41461e44875d05f444712a00e992d5a0d93af947c146bd94b725qlb | Created | Created container: extract |
| | openshift-marketplace | kubelet | 4771432b41461e44875d05f444712a00e992d5a0d93af947c146bd94b725qlb | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5" already present on machine |
| | openshift-marketplace | job-controller | 4771432b41461e44875d05f444712a00e992d5a0d93af947c146bd94b72886a | Completed | Job completed |
| | openshift-storage | operator-lifecycle-manager | lvms-operator.v4.18.3 | RequirementsUnknown | requirements not yet checked |
| (x2) | openshift-storage | operator-lifecycle-manager | lvms-operator.v4.18.3 | RequirementsNotMet | one or more requirements couldn't be found |
| (x2) | openshift-storage | operator-lifecycle-manager | lvms-operator.v4.18.3 | InstallSucceeded | waiting for install components to report healthy |
| | openshift-storage | operator-lifecycle-manager | lvms-operator.v4.18.3 | AllRequirementsMet | all requirements found, attempting install |
| | openshift-storage | replicaset-controller | lvms-operator-54844bd599 | SuccessfulCreate | Created pod: lvms-operator-54844bd599-xsrzw |
| | openshift-storage | deployment-controller | lvms-operator | ScalingReplicaSet | Scaled up replica set lvms-operator-54844bd599 to 1 |
| | openshift-storage | kubelet | lvms-operator-54844bd599-xsrzw | Pulling | Pulling image "registry.redhat.io/lvms4/lvms-rhel9-operator@sha256:80f5e48600d9b42add118f1325ced01f49d544a9a4824a7da4e3ba805d64371f" |
| | openshift-storage | multus | lvms-operator-54844bd599-xsrzw | AddedInterface | Add eth0 [10.130.0.31/23] from ovn-kubernetes |
| (x2) | openshift-storage | operator-lifecycle-manager | lvms-operator.v4.18.3 | InstallWaiting | installing: waiting for deployment lvms-operator to become ready: deployment "lvms-operator" not available: Deployment does not have minimum availability. |
| | openshift-storage | kubelet | lvms-operator-54844bd599-xsrzw | Started | Started container manager |
| | openshift-storage | kubelet | lvms-operator-54844bd599-xsrzw | Pulled | Successfully pulled image "registry.redhat.io/lvms4/lvms-rhel9-operator@sha256:80f5e48600d9b42add118f1325ced01f49d544a9a4824a7da4e3ba805d64371f" in 6.112s (6.112s including waiting). Image size: 294806923 bytes. |
| | openshift-storage | kubelet | lvms-operator-54844bd599-xsrzw | Created | Created container: manager |
| | openshift-storage | operator-lifecycle-manager | lvms-operator.v4.18.3 | InstallSucceeded | install strategy completed with no errors |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-prunecontroller | etcd-operator | PodCreated | Created Pod/revision-pruner-10-master-0 -n openshift-etcd because it was missing |
| | openshift-etcd | multus | revision-pruner-10-master-0 | AddedInterface | Add eth0 [10.130.0.32/23] from ovn-kubernetes |
| | openshift-etcd | kubelet | revision-pruner-10-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0117f94d9f2894980a318780f3c0ab2efba02e72bc7ccb267bd44c4900eb0174" already present on machine |
| | openshift-etcd | kubelet | revision-pruner-10-master-0 | Started | Started container pruner |
| | openshift-etcd | kubelet | revision-pruner-10-master-0 | Created | Created container: pruner |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-prunecontroller | etcd-operator | PodCreated | Created Pod/revision-pruner-10-master-1 -n openshift-etcd because it was missing |
| | openshift-etcd | multus | revision-pruner-10-master-1 | AddedInterface | Add eth0 [10.128.0.102/23] from ovn-kubernetes |
| | openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-master-1 | CreatedSCCRanges | created SCC ranges for metallb-system namespace |
| | openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-master-1 | CreatedSCCRanges | created SCC ranges for cert-manager-operator namespace |
| | openshift-etcd | kubelet | revision-pruner-10-master-1 | Started | Started container pruner |
| | openshift-etcd | kubelet | revision-pruner-10-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0117f94d9f2894980a318780f3c0ab2efba02e72bc7ccb267bd44c4900eb0174" already present on machine |
| | openshift-etcd | kubelet | revision-pruner-10-master-1 | Created | Created container: pruner |
| | openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-master-1 | CreatedSCCRanges | created SCC ranges for openshift-nmstate namespace |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-fsynccontroller | etcd-operator | EtcdLeaderChangeMetrics | Detected leader change increase of 2.3376957333156705 over 5 minutes on "None"; disk metrics are: etcd-master-0=0.008359,etcd-master-1=0.015040,etcd-master-2=0.055120. Most often this is as a result of inadequate storage or sometimes due to networking issues. |
| | openshift-marketplace | job-controller | 695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb692b16c | SuccessfulCreate | Created pod: 695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb699lfcx |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-prunecontroller | etcd-operator | PodCreated | Created Pod/revision-pruner-10-master-2 -n openshift-etcd because it was missing |
| | openshift-marketplace | multus | 695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb699lfcx | AddedInterface | Add eth0 [10.129.0.90/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | 695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb699lfcx | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac3e8e21a2acf57632da1156613d3ce424cc06446f4bd47349c7919367e1ff0f" already present on machine |
| | openshift-marketplace | kubelet | 695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb699lfcx | Created | Created container: util |
| | openshift-marketplace | kubelet | 695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb699lfcx | Started | Started container util |
| | openshift-etcd | kubelet | revision-pruner-10-master-2 | Started | Started container pruner |
| | openshift-etcd | kubelet | revision-pruner-10-master-2 | Created | Created container: pruner |
| | openshift-etcd | kubelet | revision-pruner-10-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0117f94d9f2894980a318780f3c0ab2efba02e72bc7ccb267bd44c4900eb0174" already present on machine |
| | openshift-etcd | multus | revision-pruner-10-master-2 | AddedInterface | Add eth0 [10.129.0.91/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | 695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb699lfcx | Pulling | Pulling image "registry.redhat.io/cert-manager/cert-manager-operator-bundle@sha256:cc20a2ab116597b080d303196825d7f5c81c6f30268f7866fb2911706efea210" |
| | openshift-marketplace | job-controller | 8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2f2057 | SuccessfulCreate | Created pod: 8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2ppm5m |
| | openshift-marketplace | kubelet | 8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2ppm5m | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac3e8e21a2acf57632da1156613d3ce424cc06446f4bd47349c7919367e1ff0f" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Killing | Stopping container kube-apiserver |
| | openshift-kube-apiserver | static-pod-installer | installer-6-master-0 | StaticPodInstallerCompleted | Successfully installed revision 6 |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | ShutdownInitiated | Received signal to terminate, becoming unready, but keeping serving |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Killing | Stopping container kube-apiserver-insecure-readyz |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Killing | Stopping container kube-apiserver-cert-syncer |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Killing | Stopping container kube-apiserver-cert-regeneration-controller |
| | openshift-marketplace | kubelet | 8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2ppm5m | Pulling | Pulling image "registry.redhat.io/openshift4/ose-metallb-operator-bundle@sha256:99f9b512353f2f026874ba29bbaaa7f4245be3fec0508a5e3b6ac7ee09d2ba31" |
| | openshift-marketplace | kubelet | 8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2ppm5m | Started | Started container util |
| | openshift-marketplace | kubelet | 8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2ppm5m | Created | Created container: util |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Killing | Stopping container kube-apiserver-check-endpoints |
| | openshift-marketplace | multus | 8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2ppm5m | AddedInterface | Add eth0 [10.129.0.92/23] from ovn-kubernetes |
| | openshift-marketplace | job-controller | fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cbc47b | SuccessfulCreate | Created pod: fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cqvvdb |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | TerminationPreShutdownHooksFinished | All pre-shutdown hooks have been finished |
| | openshift-marketplace | multus | fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cqvvdb | AddedInterface | Add eth0 [10.129.0.93/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cqvvdb | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac3e8e21a2acf57632da1156613d3ce424cc06446f4bd47349c7919367e1ff0f" already present on machine |
| | openshift-marketplace | kubelet | fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cqvvdb | Created | Created container: util |
| | openshift-marketplace | kubelet | fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cqvvdb | Started | Started container util |
| | openshift-marketplace | kubelet | fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cqvvdb | Pulling | Pulling image "registry.redhat.io/openshift4/kubernetes-nmstate-operator-bundle@sha256:73aa232967f9abd7ff21c6d9aa7fddcf2b0313d2f08fbaca90167d4ada1d2497" |
| | openshift-marketplace | kubelet | 695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb699lfcx | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5" already present on machine |
| | openshift-marketplace | kubelet | 695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb699lfcx | Started | Started container pull |
| | openshift-marketplace | kubelet | 695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb699lfcx | Created | Created container: pull |
| | openshift-marketplace | kubelet | 695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb699lfcx | Pulled | Successfully pulled image "registry.redhat.io/cert-manager/cert-manager-operator-bundle@sha256:cc20a2ab116597b080d303196825d7f5c81c6f30268f7866fb2911706efea210" in 2.67s (2.67s including waiting). Image size: 105899947 bytes. |
| | openshift-marketplace | kubelet | 695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb699lfcx | Created | Created container: extract |
| | openshift-marketplace | kubelet | 695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb699lfcx | Started | Started container extract |
| | openshift-marketplace | kubelet | 8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2ppm5m | Pulled | Successfully pulled image "registry.redhat.io/openshift4/ose-metallb-operator-bundle@sha256:99f9b512353f2f026874ba29bbaaa7f4245be3fec0508a5e3b6ac7ee09d2ba31" in 2.243s (2.243s including waiting). Image size: 328076 bytes. |
| | openshift-marketplace | kubelet | 8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2ppm5m | Created | Created container: pull |
| | openshift-marketplace | kubelet | 8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2ppm5m | Started | Started container pull |
| | openshift-marketplace | kubelet | fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cqvvdb | Created | Created container: extract |
| | openshift-marketplace | kubelet | 8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2ppm5m | Started | Started container extract |
| | openshift-marketplace | kubelet | fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cqvvdb | Created | Created container: pull |
| | openshift-marketplace | kubelet | 8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2ppm5m | Created | Created container: extract |
| | openshift-marketplace | kubelet | fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cqvvdb | Started | Started container extract |
| | openshift-marketplace | kubelet | fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cqvvdb | Pulled | Successfully pulled image "registry.redhat.io/openshift4/kubernetes-nmstate-operator-bundle@sha256:73aa232967f9abd7ff21c6d9aa7fddcf2b0313d2f08fbaca90167d4ada1d2497" in 1.6s (1.6s including waiting). Image size: 174722 bytes. |
| | openshift-marketplace | kubelet | 8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2ppm5m | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5" already present on machine |
| | openshift-etcd | multus | installer-10-master-0 | AddedInterface | Add eth0 [10.130.0.33/23] from ovn-kubernetes |
| | openshift-etcd | kubelet | installer-10-master-0 | Started | Started container installer |
| | openshift-marketplace | kubelet | fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cqvvdb | Started | Started container pull |
| | openshift-etcd | kubelet | installer-10-master-0 | Created | Created container: installer |
| | openshift-etcd | kubelet | installer-10-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0117f94d9f2894980a318780f3c0ab2efba02e72bc7ccb267bd44c4900eb0174" already present on machine |
openshift-marketplace |
job-controller |
a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d8e94a |
SuccessfulCreate |
Created pod: a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d4wlwn | |
openshift-marketplace |
job-controller |
695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb692b16c |
Completed |
Job completed | |
openshift-marketplace |
kubelet |
a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d4wlwn |
Pulling |
Pulling image "registry.redhat.io/cluster-observability-operator/cluster-observability-operator-bundle@sha256:8c27ac0dfae9e507601dc0a33ea19c8f757e744350a41f41b39c1cb8d60867b2" | |
openshift-marketplace |
multus |
a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d4wlwn |
AddedInterface |
Add eth0 [10.129.0.94/23] from ovn-kubernetes | |
openshift-marketplace |
kubelet |
a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d4wlwn |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac3e8e21a2acf57632da1156613d3ce424cc06446f4bd47349c7919367e1ff0f" already present on machine | |
openshift-marketplace |
kubelet |
a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d4wlwn |
Created |
Created container: util | |
openshift-marketplace |
kubelet |
a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d4wlwn |
Started |
Started container util | |
openshift-marketplace |
job-controller |
8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2f2057 |
Completed |
Job completed | |
openshift-apiserver |
kubelet |
apiserver-65499f9774-b84hw |
Started |
Started container fix-audit-permissions | |
openshift-marketplace |
job-controller |
fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cbc47b |
Completed |
Job completed | |
openshift-apiserver |
kubelet |
apiserver-65499f9774-b84hw |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:db6d4edac103c373eb6bee221074d39e3707377b4d26444e98afb1a1363b3cb7" already present on machine | |
openshift-apiserver |
multus |
apiserver-65499f9774-b84hw |
AddedInterface |
Add eth0 [10.129.0.95/23] from ovn-kubernetes | |
openshift-apiserver |
kubelet |
apiserver-65499f9774-b84hw |
Created |
Created container: fix-audit-permissions | |
openshift-marketplace |
kubelet |
a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d4wlwn |
Pulled |
Successfully pulled image "registry.redhat.io/cluster-observability-operator/cluster-observability-operator-bundle@sha256:8c27ac0dfae9e507601dc0a33ea19c8f757e744350a41f41b39c1cb8d60867b2" in 1.464s (1.464s including waiting). Image size: 4414581 bytes. | |
openshift-marketplace |
kubelet |
a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d4wlwn |
Started |
Started container extract | |
openshift-marketplace |
kubelet |
a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d4wlwn |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5" already present on machine | |
openshift-apiserver |
kubelet |
apiserver-65499f9774-b84hw |
Started |
Started container openshift-apiserver | |
openshift-marketplace |
kubelet |
a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d4wlwn |
Created |
Created container: pull | |
openshift-apiserver |
kubelet |
apiserver-65499f9774-b84hw |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:db6d4edac103c373eb6bee221074d39e3707377b4d26444e98afb1a1363b3cb7" already present on machine | |
openshift-apiserver |
kubelet |
apiserver-65499f9774-b84hw |
Created |
Created container: openshift-apiserver | |
openshift-marketplace |
kubelet |
a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d4wlwn |
Created |
Created container: extract | |
openshift-marketplace |
kubelet |
a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d4wlwn |
Started |
Started container pull | |
openshift-apiserver |
kubelet |
apiserver-65499f9774-b84hw |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine | |
openshift-apiserver |
kubelet |
apiserver-65499f9774-b84hw |
Started |
Started container openshift-apiserver-check-endpoints | |
openshift-apiserver |
kubelet |
apiserver-65499f9774-b84hw |
Created |
Created container: openshift-apiserver-check-endpoints | |
openshift-marketplace |
job-controller |
a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d8e94a |
Completed |
Job completed | |
metallb-system |
operator-lifecycle-manager |
metallb-operator.v4.18.0-202509240837 |
RequirementsUnknown |
requirements not yet checked | |
(x2) | openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Progressing changed from True to False ("All is well") |
metallb-system |
operator-lifecycle-manager |
metallb-operator.v4.18.0-202509240837 |
RequirementsNotMet |
one or more requirements couldn't be found | |
cert-manager |
deployment-controller |
cert-manager |
ScalingReplicaSet |
Scaled up replica set cert-manager-7d4cc89fcb to 1 | |
cert-manager |
deployment-controller |
cert-manager-cainjector |
ScalingReplicaSet |
Scaled up replica set cert-manager-cainjector-7d9f95dbf to 1 | |
openshift-kube-controller-manager |
cluster-policy-controller-namespace-security-allocation-controller |
kube-controller-manager-master-1 |
CreatedSCCRanges |
created SCC ranges for cert-manager namespace | |
(x8) | cert-manager |
replicaset-controller |
cert-manager-webhook-d969966f |
FailedCreate |
Error creating: pods "cert-manager-webhook-d969966f-" is forbidden: error looking up service account cert-manager/cert-manager-webhook: serviceaccount "cert-manager-webhook" not found |
cert-manager |
deployment-controller |
cert-manager-webhook |
ScalingReplicaSet |
Scaled up replica set cert-manager-webhook-d969966f to 1 | |
cert-manager |
replicaset-controller |
cert-manager-webhook-d969966f |
SuccessfulCreate |
Created pod: cert-manager-webhook-d969966f-ddrnx | |
cert-manager |
kubelet |
cert-manager-webhook-d969966f-ddrnx |
Pulling |
Pulling image "registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:96d51e3a64bf30cbd92836c7cbd82f06edca16eef78ab1432757d34c16628659" | |
cert-manager |
multus |
cert-manager-webhook-d969966f-ddrnx |
AddedInterface |
Add eth0 [10.130.0.35/23] from ovn-kubernetes | |
(x10) | cert-manager |
replicaset-controller |
cert-manager-cainjector-7d9f95dbf |
FailedCreate |
Error creating: pods "cert-manager-cainjector-7d9f95dbf-" is forbidden: error looking up service account cert-manager/cert-manager-cainjector: serviceaccount "cert-manager-cainjector" not found |
(x2) | openshift-operators |
controllermanager |
obo-prometheus-operator-admission-webhook |
NoPods |
No matching pods found |
openshift-operators |
operator-lifecycle-manager |
cluster-observability-operator.v1.2.2 |
RequirementsUnknown |
requirements not yet checked | |
cert-manager |
replicaset-controller |
cert-manager-cainjector-7d9f95dbf |
SuccessfulCreate |
Created pod: cert-manager-cainjector-7d9f95dbf-pxbjj | |
openshift-operators |
operator-lifecycle-manager |
cluster-observability-operator.v1.2.2 |
RequirementsNotMet |
one or more requirements couldn't be found | |
cert-manager |
kubelet |
cert-manager-webhook-d969966f-ddrnx |
Pulled |
Successfully pulled image "registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:96d51e3a64bf30cbd92836c7cbd82f06edca16eef78ab1432757d34c16628659" in 4.401s (4.401s including waiting). Image size: 427067271 bytes. | |
cert-manager |
kubelet |
cert-manager-webhook-d969966f-ddrnx |
Created |
Created container: cert-manager-webhook | |
cert-manager |
multus |
cert-manager-cainjector-7d9f95dbf-pxbjj |
AddedInterface |
Add eth0 [10.130.0.36/23] from ovn-kubernetes | |
cert-manager |
kubelet |
cert-manager-webhook-d969966f-ddrnx |
Started |
Started container cert-manager-webhook | |
cert-manager |
kubelet |
cert-manager-cainjector-7d9f95dbf-pxbjj |
Started |
Started container cert-manager-cainjector | |
cert-manager |
kubelet |
cert-manager-cainjector-7d9f95dbf-pxbjj |
Created |
Created container: cert-manager-cainjector | |
cert-manager |
kubelet |
cert-manager-cainjector-7d9f95dbf-pxbjj |
Pulled |
Container image "registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:96d51e3a64bf30cbd92836c7cbd82f06edca16eef78ab1432757d34c16628659" already present on machine | |
(x12) | cert-manager |
replicaset-controller |
cert-manager-7d4cc89fcb |
FailedCreate |
Error creating: pods "cert-manager-7d4cc89fcb-" is forbidden: error looking up service account cert-manager/cert-manager: serviceaccount "cert-manager" not found |
openshift-operators |
operator-lifecycle-manager |
cluster-observability-operator.v1.2.2 |
AllRequirementsMet |
all requirements found, attempting install | |
metallb-system |
replicaset-controller |
metallb-operator-controller-manager-6479dd8558 |
SuccessfulCreate |
Created pod: metallb-operator-controller-manager-6479dd8558-s545w | |
metallb-system |
replicaset-controller |
metallb-operator-webhook-server-6d98fdfb58 |
SuccessfulCreate |
Created pod: metallb-operator-webhook-server-6d98fdfb58-5gp8d | |
metallb-system |
deployment-controller |
metallb-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set metallb-operator-controller-manager-6479dd8558 to 1 | |
metallb-system |
deployment-controller |
metallb-operator-webhook-server |
ScalingReplicaSet |
Scaled up replica set metallb-operator-webhook-server-6d98fdfb58 to 1 | |
openshift-operators |
operator-lifecycle-manager |
cluster-observability-operator.v1.2.2 |
InstallSucceeded |
waiting for install components to report healthy | |
openshift-operators |
deployment-controller |
perses-operator |
ScalingReplicaSet |
Scaled up replica set perses-operator-54bc95c9fb to 1 | |
openshift-operators |
replicaset-controller |
perses-operator-54bc95c9fb |
SuccessfulCreate |
Created pod: perses-operator-54bc95c9fb-k5626 | |
openshift-nmstate |
operator-lifecycle-manager |
kubernetes-nmstate-operator.4.18.0-202509241752 |
RequirementsUnknown |
requirements not yet checked | |
openshift-operators |
replicaset-controller |
obo-prometheus-operator-admission-webhook-7cb968574c |
SuccessfulCreate |
Created pod: obo-prometheus-operator-admission-webhook-7cb968574c-6c6cd | |
openshift-operators |
deployment-controller |
observability-operator |
ScalingReplicaSet |
Scaled up replica set observability-operator-cc5f78dfc to 1 | |
metallb-system |
kubelet |
metallb-operator-webhook-server-6d98fdfb58-5gp8d |
Pulling |
Pulling image "registry.redhat.io/openshift4/metallb-rhel9@sha256:fbb4aaad8aa681dfebc89dec05ff22cd002c72df1421ced8602fe29efce4afa7" | |
openshift-operators |
multus |
obo-prometheus-operator-admission-webhook-7cb968574c-6c6cd |
AddedInterface |
Add eth0 [10.130.0.40/23] from ovn-kubernetes | |
openshift-operators |
replicaset-controller |
observability-operator-cc5f78dfc |
SuccessfulCreate |
Created pod: observability-operator-cc5f78dfc-xm62s | |
metallb-system |
multus |
metallb-operator-webhook-server-6d98fdfb58-5gp8d |
AddedInterface |
Add eth0 [10.130.0.38/23] from ovn-kubernetes | |
openshift-operators |
replicaset-controller |
obo-prometheus-operator-admission-webhook-7cb968574c |
SuccessfulCreate |
Created pod: obo-prometheus-operator-admission-webhook-7cb968574c-ghw8h | |
openshift-operators |
replicaset-controller |
obo-prometheus-operator-7c8cf85677 |
SuccessfulCreate |
Created pod: obo-prometheus-operator-7c8cf85677-w5k2h | |
metallb-system |
kubelet |
metallb-operator-controller-manager-6479dd8558-s545w |
Pulling |
Pulling image "registry.redhat.io/openshift4/metallb-rhel9-operator@sha256:f21e7d47f7a17f6b3520c17b8f32cbff2ae3129811d3242e08c9c48a9fbf3fbe" | |
openshift-operators |
deployment-controller |
obo-prometheus-operator-admission-webhook |
ScalingReplicaSet |
Scaled up replica set obo-prometheus-operator-admission-webhook-7cb968574c to 2 | |
openshift-operators |
deployment-controller |
obo-prometheus-operator |
ScalingReplicaSet |
Scaled up replica set obo-prometheus-operator-7c8cf85677 to 1 | |
metallb-system |
multus |
metallb-operator-controller-manager-6479dd8558-s545w |
AddedInterface |
Add eth0 [10.130.0.37/23] from ovn-kubernetes | |
openshift-operators |
multus |
obo-prometheus-operator-7c8cf85677-w5k2h |
AddedInterface |
Add eth0 [10.130.0.39/23] from ovn-kubernetes | |
openshift-operators |
kubelet |
obo-prometheus-operator-7c8cf85677-w5k2h |
Pulling |
Pulling image "registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:e2681bce57dc9c15701f5591532c2dfe8f19778606661339553a28dc003dbca5" | |
openshift-operators |
kubelet |
obo-prometheus-operator-admission-webhook-7cb968574c-6c6cd |
Pulling |
Pulling image "registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:e54c1e1301be66933f3ecb01d5a0ca27f58aabfd905b18b7d057bbf23bdb7b0d" | |
openshift-operators |
kubelet |
obo-prometheus-operator-admission-webhook-7cb968574c-ghw8h |
Pulling |
Pulling image "registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:e54c1e1301be66933f3ecb01d5a0ca27f58aabfd905b18b7d057bbf23bdb7b0d" | |
openshift-operators |
kubelet |
observability-operator-cc5f78dfc-xm62s |
Pulling |
Pulling image "registry.redhat.io/cluster-observability-operator/cluster-observability-rhel9-operator@sha256:27ffe36aad6e606e6d0a211f48f3cdb58a53aa0d5e8ead6a444427231261ab9e" | |
openshift-operators |
multus |
obo-prometheus-operator-admission-webhook-7cb968574c-ghw8h |
AddedInterface |
Add eth0 [10.128.0.103/23] from ovn-kubernetes | |
openshift-operators |
operator-lifecycle-manager |
cluster-observability-operator.v1.2.2 |
InstallWaiting |
installing: waiting for deployment obo-prometheus-operator to become ready: deployment "obo-prometheus-operator" not available: Deployment does not have minimum availability. | |
openshift-operators |
multus |
observability-operator-cc5f78dfc-xm62s |
AddedInterface |
Add eth0 [10.130.0.41/23] from ovn-kubernetes | |
openshift-operators |
multus |
perses-operator-54bc95c9fb-k5626 |
AddedInterface |
Add eth0 [10.130.0.42/23] from ovn-kubernetes | |
openshift-operators |
kubelet |
perses-operator-54bc95c9fb-k5626 |
Pulling |
Pulling image "registry.redhat.io/cluster-observability-operator/perses-0-1-rhel9-operator@sha256:bfed9f442aea6e8165644f1dc615beea06ec7fd84ea3f8ca393a63d3627c6a7c" | |
openshift-nmstate |
operator-lifecycle-manager |
kubernetes-nmstate-operator.4.18.0-202509241752 |
AllRequirementsMet |
all requirements found, attempting install | |
openshift-nmstate |
replicaset-controller |
nmstate-operator-858ddd8f98 |
SuccessfulCreate |
Created pod: nmstate-operator-858ddd8f98-7gf7t | |
openshift-nmstate |
operator-lifecycle-manager |
kubernetes-nmstate-operator.4.18.0-202509241752 |
InstallSucceeded |
waiting for install components to report healthy | |
openshift-nmstate |
deployment-controller |
nmstate-operator |
ScalingReplicaSet |
Scaled up replica set nmstate-operator-858ddd8f98 to 1 | |
metallb-system |
operator-lifecycle-manager |
install-t68m8 |
AppliedWithWarnings |
1 warning(s) generated during installation of operator "metallb-operator.v4.18.0-202509240837" (CustomResourceDefinition "bgppeers.metallb.io"): v1beta1 is deprecated, please use v1beta2 | |
openshift-etcd |
static-pod-installer |
installer-10-master-0 |
StaticPodInstallerCompleted |
Successfully installed revision 10 | |
openshift-etcd |
kubelet |
etcd-master-0 |
Killing |
Stopping container etcdctl | |
openshift-nmstate |
operator-lifecycle-manager |
kubernetes-nmstate-operator.4.18.0-202509241752 |
InstallWaiting |
installing: waiting for deployment nmstate-operator to become ready: deployment "nmstate-operator" not available: Deployment does not have minimum availability. | |
metallb-system |
operator-lifecycle-manager |
metallb-operator.v4.18.0-202509240837 |
NeedsReinstall |
calculated deployment install is bad | |
openshift-operators |
kubelet |
obo-prometheus-operator-admission-webhook-7cb968574c-6c6cd |
Pulled |
Successfully pulled image "registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:e54c1e1301be66933f3ecb01d5a0ca27f58aabfd905b18b7d057bbf23bdb7b0d" in 4.12s (4.12s including waiting). Image size: 259020765 bytes. | |
(x2) | metallb-system |
operator-lifecycle-manager |
metallb-operator.v4.18.0-202509240837 |
AllRequirementsMet |
all requirements found, attempting install |
metallb-system |
kubelet |
metallb-operator-webhook-server-6d98fdfb58-5gp8d |
Pulled |
Successfully pulled image "registry.redhat.io/openshift4/metallb-rhel9@sha256:fbb4aaad8aa681dfebc89dec05ff22cd002c72df1421ced8602fe29efce4afa7" in 7.479s (7.479s including waiting). Image size: 548128129 bytes. | |
openshift-operators |
kubelet |
perses-operator-54bc95c9fb-k5626 |
Pulled |
Successfully pulled image "registry.redhat.io/cluster-observability-operator/perses-0-1-rhel9-operator@sha256:bfed9f442aea6e8165644f1dc615beea06ec7fd84ea3f8ca393a63d3627c6a7c" in 6.391s (6.391s including waiting). Image size: 282294544 bytes. | |
metallb-system |
kubelet |
metallb-operator-controller-manager-6479dd8558-s545w |
Pulled |
Successfully pulled image "registry.redhat.io/openshift4/metallb-rhel9-operator@sha256:f21e7d47f7a17f6b3520c17b8f32cbff2ae3129811d3242e08c9c48a9fbf3fbe" in 7.684s (7.684s including waiting). Image size: 455553147 bytes. | |
openshift-operators |
kubelet |
obo-prometheus-operator-7c8cf85677-w5k2h |
Pulled |
Successfully pulled image "registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:e2681bce57dc9c15701f5591532c2dfe8f19778606661339553a28dc003dbca5" in 6.993s (6.993s including waiting). Image size: 303611421 bytes. | |
openshift-operators |
kubelet |
observability-operator-cc5f78dfc-xm62s |
Pulled |
Successfully pulled image "registry.redhat.io/cluster-observability-operator/cluster-observability-rhel9-operator@sha256:27ffe36aad6e606e6d0a211f48f3cdb58a53aa0d5e8ead6a444427231261ab9e" in 6.74s (6.74s including waiting). Image size: 488768835 bytes. | |
cert-manager |
replicaset-controller |
cert-manager-7d4cc89fcb |
SuccessfulCreate |
Created pod: cert-manager-7d4cc89fcb-mcxbx | |
openshift-operators |
kubelet |
obo-prometheus-operator-7c8cf85677-w5k2h |
Started |
Started container prometheus-operator | |
openshift-operators |
kubelet |
obo-prometheus-operator-7c8cf85677-w5k2h |
Created |
Created container: prometheus-operator | |
cert-manager |
multus |
cert-manager-7d4cc89fcb-mcxbx |
AddedInterface |
Add eth0 [10.130.0.44/23] from ovn-kubernetes | |
cert-manager |
kubelet |
cert-manager-7d4cc89fcb-mcxbx |
Pulled |
Container image "registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:96d51e3a64bf30cbd92836c7cbd82f06edca16eef78ab1432757d34c16628659" already present on machine | |
cert-manager |
kubelet |
cert-manager-7d4cc89fcb-mcxbx |
Created |
Created container: cert-manager-controller | |
openshift-operators |
kubelet |
perses-operator-54bc95c9fb-k5626 |
Created |
Created container: perses-operator | |
cert-manager |
kubelet |
cert-manager-7d4cc89fcb-mcxbx |
Started |
Started container cert-manager-controller | |
openshift-operators |
kubelet |
observability-operator-cc5f78dfc-xm62s |
Started |
Started container operator | |
openshift-operators |
kubelet |
observability-operator-cc5f78dfc-xm62s |
Created |
Created container: operator | |
metallb-system |
metallb-operator-controller-manager-6479dd8558-s545w_4245aac1-5887-445f-9494-92b01953ec4f |
metallb.io.metallboperator |
LeaderElection |
metallb-operator-controller-manager-6479dd8558-s545w_4245aac1-5887-445f-9494-92b01953ec4f became leader | |
metallb-system |
kubelet |
metallb-operator-controller-manager-6479dd8558-s545w |
Created |
Created container: manager | |
metallb-system |
kubelet |
metallb-operator-controller-manager-6479dd8558-s545w |
Started |
Started container manager | |
metallb-system |
kubelet |
metallb-operator-webhook-server-6d98fdfb58-5gp8d |
Created |
Created container: webhook-server | |
openshift-operators |
kubelet |
obo-prometheus-operator-admission-webhook-7cb968574c-6c6cd |
Created |
Created container: prometheus-operator-admission-webhook | |
openshift-operators |
kubelet |
obo-prometheus-operator-admission-webhook-7cb968574c-6c6cd |
Started |
Started container prometheus-operator-admission-webhook | |
openshift-operators |
kubelet |
perses-operator-54bc95c9fb-k5626 |
Started |
Started container perses-operator | |
openshift-nmstate |
kubelet |
nmstate-operator-858ddd8f98-7gf7t |
Pulling |
Pulling image "registry.redhat.io/openshift4/kubernetes-nmstate-rhel9-operator@sha256:4a7b1d0616659315824d4b04d8b3d0ba8c940d405803b7f89bacd0174b1e0d7f" | |
openshift-nmstate |
multus |
nmstate-operator-858ddd8f98-7gf7t |
AddedInterface |
Add eth0 [10.130.0.43/23] from ovn-kubernetes | |
metallb-system |
kubelet |
metallb-operator-webhook-server-6d98fdfb58-5gp8d |
Started |
Started container webhook-server | |
(x2) | metallb-system |
operator-lifecycle-manager |
metallb-operator.v4.18.0-202509240837 |
InstallSucceeded |
waiting for install components to report healthy |
openshift-operators |
kubelet |
obo-prometheus-operator-admission-webhook-7cb968574c-ghw8h |
Pulled |
Successfully pulled image "registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:e54c1e1301be66933f3ecb01d5a0ca27f58aabfd905b18b7d057bbf23bdb7b0d" in 8.588s (8.588s including waiting). Image size: 259020765 bytes. | |
openshift-operators |
kubelet |
obo-prometheus-operator-admission-webhook-7cb968574c-ghw8h |
Started |
Started container prometheus-operator-admission-webhook | |
openshift-operators |
kubelet |
obo-prometheus-operator-admission-webhook-7cb968574c-ghw8h |
Created |
Created container: prometheus-operator-admission-webhook | |
assisted-installer |
job-controller |
assisted-installer-controller |
Completed |
Job completed | |
openshift-operators |
operator-lifecycle-manager |
cluster-observability-operator.v1.2.2 |
InstallWaiting |
installing: waiting for deployment perses-operator to become ready: deployment "perses-operator" not available: Deployment does not have minimum availability. | |
(x2) | metallb-system |
operator-lifecycle-manager |
metallb-operator.v4.18.0-202509240837 |
InstallWaiting |
installing: waiting for deployment metallb-operator-controller-manager to become ready: deployment "metallb-operator-controller-manager" not available: Deployment does not have minimum availability. |
(x11) | openshift-kube-apiserver |
kubelet |
kube-apiserver-guard-master-0 |
Unhealthy |
Readiness probe failed: HTTP probe failed with statuscode: 500 |
openshift-nmstate |
kubelet |
nmstate-operator-858ddd8f98-7gf7t |
Pulled |
Successfully pulled image "registry.redhat.io/openshift4/kubernetes-nmstate-rhel9-operator@sha256:4a7b1d0616659315824d4b04d8b3d0ba8c940d405803b7f89bacd0174b1e0d7f" in 7.533s (7.533s including waiting). Image size: 444452026 bytes. | |
openshift-nmstate |
kubelet |
nmstate-operator-858ddd8f98-7gf7t |
Created |
Created container: nmstate-operator | |
openshift-nmstate |
kubelet |
nmstate-operator-858ddd8f98-7gf7t |
Started |
Started container nmstate-operator | |
openshift-nmstate |
operator-lifecycle-manager |
kubernetes-nmstate-operator.4.18.0-202509241752 |
InstallSucceeded |
install strategy completed with no errors | |
(x12) | openshift-kube-apiserver |
kubelet |
kube-apiserver-guard-master-0 |
ProbeError |
Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]api-openshift-apiserver-available ok [+]api-openshift-oauth-apiserver-available ok [+]informer-sync ok [+]poststarthook/openshift.io-openshift-apiserver-reachable ok [+]poststarthook/openshift.io-oauth-apiserver-reachable ok [+]poststarthook/start-apiserver-admission-initializer ok [+]poststarthook/quota.openshift.io-clusterquotamapping ok [+]poststarthook/openshift.io-api-request-count-filter ok [+]poststarthook/openshift.io-startkubeinformers ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/priority-and-fairness-config-consumer ok [+]poststarthook/priority-and-fairness-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/start-apiextensions-informers ok [+]poststarthook/start-apiextensions-controllers ok [+]poststarthook/crd-informer-synced ok [+]poststarthook/start-system-namespaces-controller ok [+]poststarthook/start-cluster-authentication-info-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok [+]poststarthook/start-legacy-token-tracking-controller ok [+]poststarthook/start-service-ip-repair-controllers ok [+]poststarthook/rbac/bootstrap-roles ok [+]poststarthook/scheduling/bootstrap-system-priority-classes ok [+]poststarthook/priority-and-fairness-config-producer ok [+]poststarthook/bootstrap-controller ok [+]poststarthook/aggregator-reload-proxy-client-cert ok [+]poststarthook/start-kube-aggregator-informers ok [+]poststarthook/apiservice-status-local-available-controller ok [+]poststarthook/apiservice-status-remote-available-controller ok [+]poststarthook/apiservice-registration-controller ok [+]poststarthook/apiservice-wait-for-first-sync ok [+]poststarthook/apiservice-discovery-controller ok [+]poststarthook/kube-apiserver-autoregistration ok [+]autoregister-completion ok 
[+]poststarthook/apiservice-openapi-controller ok [+]poststarthook/apiservice-openapiv3-controller ok [-]shutdown failed: reason withheld readyz check failed |
kube-system |
cert-manager-cainjector-7d9f95dbf-pxbjj_c745c4da-9265-432b-bef1-31e1cca35310 |
cert-manager-cainjector-leader-election |
LeaderElection |
cert-manager-cainjector-7d9f95dbf-pxbjj_c745c4da-9265-432b-bef1-31e1cca35310 became leader | |
openshift-operators |
operator-lifecycle-manager |
cluster-observability-operator.v1.2.2 |
InstallSucceeded |
install strategy completed with no errors | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-fsynccontroller |
etcd-operator |
EtcdLeaderChangeMetrics |
Detected leader change increase of 2.2222222222222223 over 5 minutes on "None"; disk metrics are: etcd-master-0=0.008360,etcd-master-1=0.015040,etcd-master-2=0.055120. Most often this is as a result of inadequate storage or sometimes due to networking issues. | |
kube-system |
cert-manager-leader-election |
cert-manager-controller |
LeaderElection |
cert-manager-7d4cc89fcb-mcxbx-external-cert-manager-controller became leader | |
openshift-etcd |
kubelet |
etcd-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:145b8ac6899b60bd933b5fe64e3eb49ddbc7401a13f30fda6fd207697e8c9ab8" already present on machine | |
(x8) | openshift-etcd |
kubelet |
etcd-guard-master-0 |
ProbeError |
Readiness probe error: Get "https://192.168.34.10:9980/readyz": dial tcp 192.168.34.10:9980: connect: connection refused body: |
openshift-etcd |
kubelet |
etcd-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:145b8ac6899b60bd933b5fe64e3eb49ddbc7401a13f30fda6fd207697e8c9ab8" already present on machine | |
openshift-etcd |
kubelet |
etcd-master-0 |
Started |
Started container setup | |
openshift-etcd |
kubelet |
etcd-master-0 |
Created |
Created container: setup | |
(x8) | openshift-etcd |
kubelet |
etcd-guard-master-0 |
Unhealthy |
Readiness probe failed: Get "https://192.168.34.10:9980/readyz": dial tcp 192.168.34.10:9980: connect: connection refused |
openshift-kube-apiserver |
apiserver |
kube-apiserver-master-0 |
InFlightRequestsDrained |
All non long-running request(s) in-flight have drained | |
openshift-etcd |
kubelet |
etcd-master-0 |
Created |
Created container: etcd-ensure-env-vars | |
openshift-etcd |
kubelet |
etcd-master-0 |
Started |
Started container etcd-ensure-env-vars | |
openshift-kube-apiserver |
apiserver |
kube-apiserver-master-0 |
AfterShutdownDelayDuration |
The minimal shutdown duration of 1m10s finished | |
openshift-etcd |
kubelet |
etcd-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:145b8ac6899b60bd933b5fe64e3eb49ddbc7401a13f30fda6fd207697e8c9ab8" already present on machine | |
 | openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | HTTPServerStoppedListening | HTTP Server has stopped listening |
 | openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcd-resources-copy |
 | metallb-system | operator-lifecycle-manager | metallb-operator.v4.18.0-202509240837 | InstallSucceeded | install strategy completed with no errors |
 | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcd-resources-copy |
 | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:145b8ac6899b60bd933b5fe64e3eb49ddbc7401a13f30fda6fd207697e8c9ab8" already present on machine |
 | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:145b8ac6899b60bd933b5fe64e3eb49ddbc7401a13f30fda6fd207697e8c9ab8" already present on machine |
 | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcdctl |
 | openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | TerminationGracefulTerminationFinished | All pending requests processed |
 | openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcdctl |
 | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:145b8ac6899b60bd933b5fe64e3eb49ddbc7401a13f30fda6fd207697e8c9ab8" already present on machine |
 | openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcd |
 | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcd |
 | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0117f94d9f2894980a318780f3c0ab2efba02e72bc7ccb267bd44c4900eb0174" already present on machine |
 | openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcd-readyz |
 | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0117f94d9f2894980a318780f3c0ab2efba02e72bc7ccb267bd44c4900eb0174" already present on machine |
 | openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcd-metrics |
 | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcd-readyz |
 | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcd-rev |
 | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcd-metrics |
 | openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcd-rev |
 | openshift-operator-lifecycle-manager | job-controller | collect-profiles-29340810 | SuccessfulCreate | Created pod: collect-profiles-29340810-2nzff |
 | openshift-operator-lifecycle-manager | multus | collect-profiles-29340810-2nzff | AddedInterface | Add eth0 [10.128.0.104/23] from ovn-kubernetes |
 | openshift-operator-lifecycle-manager | kubelet | collect-profiles-29340810-2nzff | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac3e8e21a2acf57632da1156613d3ce424cc06446f4bd47349c7919367e1ff0f" already present on machine |
 | openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SuccessfulCreate | Created job collect-profiles-29340810 |
 | openshift-operator-lifecycle-manager | kubelet | collect-profiles-29340810-2nzff | Created | Created container: collect-profiles |
 | openshift-operator-lifecycle-manager | kubelet | collect-profiles-29340810-2nzff | Started | Started container collect-profiles |
 | metallb-system | deployment-controller | frr-k8s-webhook-server | ScalingReplicaSet | Scaled up replica set frr-k8s-webhook-server-64bf5d555 to 1 |
 | metallb-system | replicaset-controller | frr-k8s-webhook-server-64bf5d555 | SuccessfulCreate | Created pod: frr-k8s-webhook-server-64bf5d555-sgx9c |
 | metallb-system | daemonset-controller | speaker | SuccessfulCreate | Created pod: speaker-kp26f |
 | metallb-system | daemonset-controller | speaker | SuccessfulCreate | Created pod: speaker-7mkjj |
 | metallb-system | daemonset-controller | speaker | SuccessfulCreate | Created pod: speaker-hfvls |
 | metallb-system | kubelet | speaker-hfvls | FailedMount | MountVolume.SetUp failed for volume "metrics-certs" : secret "speaker-certs-secret" not found |
 | metallb-system | deployment-controller | controller | ScalingReplicaSet | Scaled up replica set controller-68d546b9d8 to 1 |
 | metallb-system | replicaset-controller | controller-68d546b9d8 | SuccessfulCreate | Created pod: controller-68d546b9d8-9strj |
 | metallb-system | daemonset-controller | frr-k8s | SuccessfulCreate | Created pod: frr-k8s-nnbg4 |
 | metallb-system | daemonset-controller | frr-k8s | SuccessfulCreate | Created pod: frr-k8s-2pxml |
 | metallb-system | daemonset-controller | frr-k8s | SuccessfulCreate | Created pod: frr-k8s-qqrwm |
 | default | garbage-collector-controller | frr-k8s-validating-webhook-configuration | OwnerRefInvalidNamespace | ownerRef [metallb.io/v1beta1/MetalLB, namespace: , name: metallb, uid: 1b200ddb-097b-4f2f-9567-6f6d636e1da2] does not exist in namespace "" |
 | metallb-system | kubelet | controller-68d546b9d8-9strj | Pulling | Pulling image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:784c4667a867abdbec6d31a4bbde52676a0f37f8e448eaae37568a46fcdeace7" |
(x2) | metallb-system | kubelet | speaker-7mkjj | FailedMount | MountVolume.SetUp failed for volume "memberlist" : secret "metallb-memberlist" not found |
 | metallb-system | multus | controller-68d546b9d8-9strj | AddedInterface | Add eth0 [10.130.0.46/23] from ovn-kubernetes |
 | metallb-system | kubelet | controller-68d546b9d8-9strj | Pulled | Container image "registry.redhat.io/openshift4/metallb-rhel9@sha256:fbb4aaad8aa681dfebc89dec05ff22cd002c72df1421ced8602fe29efce4afa7" already present on machine |
 | openshift-etcd | kubelet | etcd-guard-master-0 | Unhealthy | Readiness probe failed: Get "https://192.168.34.10:9980/readyz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) |
 | metallb-system | kubelet | controller-68d546b9d8-9strj | Created | Created container: controller |
 | metallb-system | kubelet | controller-68d546b9d8-9strj | Started | Started container controller |
 | openshift-operator-lifecycle-manager | job-controller | collect-profiles-29340810 | Completed | Job completed |
 | openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SawCompletedJob | Saw completed job: collect-profiles-29340810, condition: Complete |
 | metallb-system | kubelet | frr-k8s-webhook-server-64bf5d555-sgx9c | Pulling | Pulling image "registry.redhat.io/openshift4/frr-rhel9@sha256:41205f57dd50b222640776ca5fcda336ca1541f53dae820d7bc6669f52c28a87" |
(x2) | metallb-system | kubelet | speaker-hfvls | FailedMount | MountVolume.SetUp failed for volume "memberlist" : secret "metallb-memberlist" not found |
 | metallb-system | kubelet | frr-k8s-2pxml | Pulling | Pulling image "registry.redhat.io/openshift4/frr-rhel9@sha256:41205f57dd50b222640776ca5fcda336ca1541f53dae820d7bc6669f52c28a87" |
(x2) | metallb-system | kubelet | speaker-kp26f | FailedMount | MountVolume.SetUp failed for volume "memberlist" : secret "metallb-memberlist" not found |
 | metallb-system | kubelet | frr-k8s-nnbg4 | Pulling | Pulling image "registry.redhat.io/openshift4/frr-rhel9@sha256:41205f57dd50b222640776ca5fcda336ca1541f53dae820d7bc6669f52c28a87" |
 | metallb-system | multus | frr-k8s-webhook-server-64bf5d555-sgx9c | AddedInterface | Add eth0 [10.130.0.45/23] from ovn-kubernetes |
 | metallb-system | kubelet | frr-k8s-qqrwm | Pulling | Pulling image "registry.redhat.io/openshift4/frr-rhel9@sha256:41205f57dd50b222640776ca5fcda336ca1541f53dae820d7bc6669f52c28a87" |
 | openshift-etcd | kubelet | etcd-guard-master-0 | ProbeError | Readiness probe error: Get "https://192.168.34.10:9980/readyz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) body: |
 | metallb-system | kubelet | speaker-hfvls | Created | Created container: speaker |
 | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b283544da0bfbf6c8c5a11e0ca9fb4daaf4ac4ec910b30c07c7bef65a98f11d" already present on machine |
 | metallb-system | kubelet | speaker-7mkjj | Pulling | Pulling image "registry.redhat.io/openshift4/metallb-rhel9@sha256:fbb4aaad8aa681dfebc89dec05ff22cd002c72df1421ced8602fe29efce4afa7" |
 | metallb-system | kubelet | speaker-hfvls | Pulled | Container image "registry.redhat.io/openshift4/metallb-rhel9@sha256:fbb4aaad8aa681dfebc89dec05ff22cd002c72df1421ced8602fe29efce4afa7" already present on machine |
 | metallb-system | kubelet | speaker-hfvls | Started | Started container speaker |
 | metallb-system | kubelet | speaker-hfvls | Pulling | Pulling image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:784c4667a867abdbec6d31a4bbde52676a0f37f8e448eaae37568a46fcdeace7" |
 | metallb-system | kubelet | speaker-kp26f | Pulling | Pulling image "registry.redhat.io/openshift4/metallb-rhel9@sha256:fbb4aaad8aa681dfebc89dec05ff22cd002c72df1421ced8602fe29efce4afa7" |
 | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b283544da0bfbf6c8c5a11e0ca9fb4daaf4ac4ec910b30c07c7bef65a98f11d" already present on machine |
 | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container setup |
 | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: setup |
 | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver-cert-regeneration-controller |
 | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver-cert-syncer |
 | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver-cert-regeneration-controller |
 | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver |
 | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver |
 | metallb-system | kubelet | controller-68d546b9d8-9strj | Pulled | Successfully pulled image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:784c4667a867abdbec6d31a4bbde52676a0f37f8e448eaae37568a46fcdeace7" in 1.42s (1.42s including waiting). Image size: 458126368 bytes. |
 | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine |
 | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine |
 | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver-insecure-readyz |
 | metallb-system | kubelet | controller-68d546b9d8-9strj | Created | Created container: kube-rbac-proxy |
 | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver-insecure-readyz |
 | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine |
 | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver-cert-syncer |
 | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine |
 | metallb-system | kubelet | controller-68d546b9d8-9strj | Started | Started container kube-rbac-proxy |
 | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver-check-endpoints |
 | openshift-kube-apiserver | cert-regeneration-controller | openshift-kube-apiserver | ControlPlaneTopology | unable to get control plane topology, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:serviceaccount:openshift-kube-apiserver:localhost-recovery-client" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:build-strategy-jenkinspipeline" not found, clusterrole.rbac.authorization.k8s.io "system:build-strategy-docker" not found, clusterrole.rbac.authorization.k8s.io "system:openshift:discovery" not found, clusterrole.rbac.authorization.k8s.io "basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:openshift:scc:restricted-v2" not found, clusterrole.rbac.authorization.k8s.io "system:oauth-token-deleter" not found, clusterrole.rbac.authorization.k8s.io "system:webhook" not found, clusterrole.rbac.authorization.k8s.io "system:scope-impersonation" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "cluster-admin" not found, clusterrole.rbac.authorization.k8s.io "helm-chartrepos-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:build-strategy-source" not found, clusterrole.rbac.authorization.k8s.io "console-extensions-reader" not found, clusterrole.rbac.authorization.k8s.io "self-access-reviewer" not found, clusterrole.rbac.authorization.k8s.io "system:openshift:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "cluster-status" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:service-account-issuer-discovery" not found] |
 | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver-check-endpoints |
 | metallb-system | kubelet | speaker-hfvls | Pulled | Successfully pulled image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:784c4667a867abdbec6d31a4bbde52676a0f37f8e448eaae37568a46fcdeace7" in 1.336s (1.336s including waiting). Image size: 458126368 bytes. |
 | metallb-system | kubelet | speaker-hfvls | Started | Started container kube-rbac-proxy |
 | metallb-system | kubelet | speaker-hfvls | Created | Created container: kube-rbac-proxy |
 | openshift-nmstate | deployment-controller | nmstate-metrics | ScalingReplicaSet | Scaled up replica set nmstate-metrics-fdff9cb8d to 1 |
 | openshift-nmstate | kubelet | nmstate-handler-lkd88 | Pulling | Pulling image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:71241e7c8aa7f5e68444557713066cc5e3975159fe44c6da8adef05831396412" |
 | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: status.relatedObjects changed from [{"console.openshift.io" "consoleplugins" "" "monitoring-plugin"} {"console.openshift.io" "consoleplugins" "" "networking-console-plugin"} {"operator.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "infrastructures" "" "cluster"} {"config.openshift.io" "proxies" "" "cluster"} {"config.openshift.io" "oauths" "" "cluster"} {"oauth.openshift.io" "oauthclients" "" "console"} {"" "namespaces" "" "openshift-console-operator"} {"" "namespaces" "" "openshift-console"} {"" "configmaps" "openshift-config-managed" "console-public"}] to [{"console.openshift.io" "consoleplugins" "" "monitoring-plugin"} {"console.openshift.io" "consoleplugins" "" "networking-console-plugin"} {"console.openshift.io" "consoleplugins" "" "nmstate-console-plugin"} {"operator.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "infrastructures" "" "cluster"} {"config.openshift.io" "proxies" "" "cluster"} {"config.openshift.io" "oauths" "" "cluster"} {"oauth.openshift.io" "oauthclients" "" "console"} {"" "namespaces" "" "openshift-console-operator"} {"" "namespaces" "" "openshift-console"} {"" "configmaps" "openshift-config-managed" "console-public"}] |
 | openshift-nmstate | daemonset-controller | nmstate-handler | SuccessfulCreate | Created pod: nmstate-handler-sbvf7 |
(x3) | openshift-console-operator | console-operator-console-operator-consoleoperator | console-operator | ConfigMapUpdated | Updated ConfigMap/console-config -n openshift-console: cause by changes in data.console-config.yaml |
 | openshift-nmstate | daemonset-controller | nmstate-handler | SuccessfulCreate | Created pod: nmstate-handler-g87gn |
 | openshift-nmstate | deployment-controller | nmstate-console-plugin | ScalingReplicaSet | Scaled up replica set nmstate-console-plugin-6b874cbd85 to 1 |
 | openshift-nmstate | replicaset-controller | nmstate-console-plugin-6b874cbd85 | SuccessfulCreate | Created pod: nmstate-console-plugin-6b874cbd85-h8v5p |
 | openshift-nmstate | replicaset-controller | nmstate-metrics-fdff9cb8d | SuccessfulCreate | Created pod: nmstate-metrics-fdff9cb8d-j4j8c |
 | openshift-nmstate | replicaset-controller | nmstate-webhook-6cdbc54649 | SuccessfulCreate | Created pod: nmstate-webhook-6cdbc54649-bj7wk |
 | openshift-nmstate | deployment-controller | nmstate-webhook | ScalingReplicaSet | Scaled up replica set nmstate-webhook-6cdbc54649 to 1 |
 | openshift-nmstate | daemonset-controller | nmstate-handler | SuccessfulCreate | Created pod: nmstate-handler-lkd88 |
 | openshift-nmstate | kubelet | nmstate-handler-sbvf7 | Pulling | Pulling image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:71241e7c8aa7f5e68444557713066cc5e3975159fe44c6da8adef05831396412" |
(x3) | openshift-etcd | kubelet | etcd-guard-master-0 | ProbeError | Readiness probe error: Get "https://192.168.34.10:9980/readyz": net/http: request canceled (Client.Timeout exceeded while awaiting headers) body: |
 | openshift-console | replicaset-controller | console-5958979c8 | SuccessfulCreate | Created pod: console-5958979c8-p9l2s |
 | openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled up replica set console-5958979c8 to 2 |
 | openshift-console | replicaset-controller | console-5958979c8 | SuccessfulCreate | Created pod: console-5958979c8-mpc88 |
 | openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | KubeAPIReadyz | readyz=true |
(x6) | openshift-console-operator | console-operator-console-operator-consoleoperator | console-operator | DeploymentUpdated | Updated Deployment.apps/console -n openshift-console because it changed |
 | openshift-console | kubelet | console-564c479f-s9vtn | Killing | Stopping container console |
 | openshift-console | replicaset-controller | console-564c479f | SuccessfulDelete | Deleted pod: console-564c479f-s9vtn |
(x3) | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Progressing changed from False to True ("SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected") |
 | openshift-nmstate | multus | nmstate-metrics-fdff9cb8d-j4j8c | AddedInterface | Add eth0 [10.130.0.47/23] from ovn-kubernetes |
 | openshift-nmstate | multus | nmstate-webhook-6cdbc54649-bj7wk | AddedInterface | Add eth0 [10.130.0.48/23] from ovn-kubernetes |
 | openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled down replica set console-564c479f to 1 from 2 |
 | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Progressing message changed from "SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected" to "SyncLoopRefreshProgressing: working toward version 4.18.25, 1 replicas available" |
 | metallb-system | kubelet | speaker-kp26f | Pulled | Successfully pulled image "registry.redhat.io/openshift4/metallb-rhel9@sha256:fbb4aaad8aa681dfebc89dec05ff22cd002c72df1421ced8602fe29efce4afa7" in 5.798s (5.798s including waiting). Image size: 548128129 bytes. |
 | metallb-system | kubelet | speaker-kp26f | Pulling | Pulling image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:784c4667a867abdbec6d31a4bbde52676a0f37f8e448eaae37568a46fcdeace7" |
 | openshift-nmstate | kubelet | nmstate-handler-g87gn | Pulling | Pulling image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:71241e7c8aa7f5e68444557713066cc5e3975159fe44c6da8adef05831396412" |
 | openshift-nmstate | kubelet | nmstate-metrics-fdff9cb8d-j4j8c | Pulling | Pulling image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:71241e7c8aa7f5e68444557713066cc5e3975159fe44c6da8adef05831396412" |
 | metallb-system | kubelet | speaker-kp26f | Created | Created container: speaker |
 | metallb-system | kubelet | frr-k8s-2pxml | Pulled | Successfully pulled image "registry.redhat.io/openshift4/frr-rhel9@sha256:41205f57dd50b222640776ca5fcda336ca1541f53dae820d7bc6669f52c28a87" in 7.349s (7.349s including waiting). Image size: 664216528 bytes. |
 | metallb-system | kubelet | frr-k8s-2pxml | Created | Created container: cp-frr-files |
 | metallb-system | kubelet | frr-k8s-2pxml | Started | Started container cp-frr-files |
 | metallb-system | kubelet | speaker-kp26f | Started | Started container speaker |
 | metallb-system | kubelet | frr-k8s-qqrwm | Started | Started container cp-frr-files |
 | metallb-system | kubelet | frr-k8s-webhook-server-64bf5d555-sgx9c | Started | Started container frr-k8s-webhook-server |
 | metallb-system | kubelet | frr-k8s-webhook-server-64bf5d555-sgx9c | Created | Created container: frr-k8s-webhook-server |
 | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" to "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: 2 of 3 members are available, master-0 is unhealthy" |
 | metallb-system | kubelet | frr-k8s-webhook-server-64bf5d555-sgx9c | Pulled | Successfully pulled image "registry.redhat.io/openshift4/frr-rhel9@sha256:41205f57dd50b222640776ca5fcda336ca1541f53dae820d7bc6669f52c28a87" in 7.404s (7.404s including waiting). Image size: 664216528 bytes. |
 | metallb-system | kubelet | frr-k8s-qqrwm | Pulled | Successfully pulled image "registry.redhat.io/openshift4/frr-rhel9@sha256:41205f57dd50b222640776ca5fcda336ca1541f53dae820d7bc6669f52c28a87" in 7.694s (7.694s including waiting). Image size: 664216528 bytes. |
 | openshift-nmstate | kubelet | nmstate-webhook-6cdbc54649-bj7wk | Pulling | Pulling image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:71241e7c8aa7f5e68444557713066cc5e3975159fe44c6da8adef05831396412" |
 | metallb-system | kubelet | frr-k8s-qqrwm | Created | Created container: cp-frr-files |
 | openshift-nmstate | multus | nmstate-console-plugin-6b874cbd85-h8v5p | AddedInterface | Add eth0 [10.128.0.105/23] from ovn-kubernetes |
 | openshift-console | kubelet | console-5958979c8-p9l2s | Started | Started container console |
 | metallb-system | kubelet | speaker-7mkjj | Started | Started container speaker |
 | metallb-system | kubelet | speaker-7mkjj | Created | Created container: speaker |
 | metallb-system | kubelet | speaker-7mkjj | Pulled | Successfully pulled image "registry.redhat.io/openshift4/metallb-rhel9@sha256:fbb4aaad8aa681dfebc89dec05ff22cd002c72df1421ced8602fe29efce4afa7" in 6.795s (6.795s including waiting). Image size: 548128129 bytes. |
 | metallb-system | kubelet | frr-k8s-nnbg4 | Pulled | Successfully pulled image "registry.redhat.io/openshift4/frr-rhel9@sha256:41205f57dd50b222640776ca5fcda336ca1541f53dae820d7bc6669f52c28a87" in 8.369s (8.369s including waiting). Image size: 664216528 bytes. |
 | metallb-system | kubelet | frr-k8s-nnbg4 | Created | Created container: cp-frr-files |
 | metallb-system | kubelet | frr-k8s-nnbg4 | Started | Started container cp-frr-files |
 | metallb-system | kubelet | frr-k8s-2pxml | Started | Started container cp-reloader |
 | metallb-system | kubelet | speaker-kp26f | Started | Started container kube-rbac-proxy |
 | openshift-console | kubelet | console-5958979c8-p9l2s | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8f562f655905d420982e90e7817214586a6e8103e7ef15d1dd57f2ae5ee16edb" already present on machine |
 | metallb-system | kubelet | speaker-kp26f | Created | Created container: kube-rbac-proxy |
 | metallb-system | kubelet | speaker-kp26f | Pulled | Successfully pulled image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:784c4667a867abdbec6d31a4bbde52676a0f37f8e448eaae37568a46fcdeace7" in 1.086s (1.086s including waiting). Image size: 458126368 bytes. |
 | metallb-system | kubelet | frr-k8s-2pxml | Created | Created container: cp-reloader |
 | openshift-console | kubelet | console-5958979c8-p9l2s | Created | Created container: console |
 | metallb-system | kubelet | frr-k8s-qqrwm | Started | Started container cp-reloader |
 | metallb-system | kubelet | frr-k8s-qqrwm | Created | Created container: cp-reloader |
 | metallb-system | kubelet | frr-k8s-qqrwm | Pulled | Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:41205f57dd50b222640776ca5fcda336ca1541f53dae820d7bc6669f52c28a87" already present on machine |
 | openshift-nmstate | kubelet | nmstate-console-plugin-6b874cbd85-h8v5p | Pulling | Pulling image "registry.redhat.io/openshift4/nmstate-console-plugin-rhel9@sha256:98eacebec0d128e18d9109d3eadb5a1470ec990f11ad3e717a6638a8675d6e66" |
 | metallb-system | kubelet | speaker-7mkjj | Pulling | Pulling image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:784c4667a867abdbec6d31a4bbde52676a0f37f8e448eaae37568a46fcdeace7" |
 | metallb-system | kubelet | frr-k8s-nnbg4 | Started | Started container cp-reloader |
 | openshift-console | multus | console-5958979c8-p9l2s | AddedInterface | Add eth0 [10.128.0.106/23] from ovn-kubernetes |
 | metallb-system | kubelet | frr-k8s-2pxml | Pulled | Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:41205f57dd50b222640776ca5fcda336ca1541f53dae820d7bc6669f52c28a87" already present on machine |
 | metallb-system | kubelet | frr-k8s-nnbg4 | Created | Created container: cp-reloader |
 | metallb-system | kubelet | frr-k8s-nnbg4 | Pulled | Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:41205f57dd50b222640776ca5fcda336ca1541f53dae820d7bc6669f52c28a87" already present on machine |
 | openshift-nmstate | kubelet | nmstate-webhook-6cdbc54649-bj7wk | Created | Created container: nmstate-webhook |
 | metallb-system | kubelet | frr-k8s-qqrwm | Pulled | Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:41205f57dd50b222640776ca5fcda336ca1541f53dae820d7bc6669f52c28a87" already present on machine |
 | openshift-nmstate | kubelet | nmstate-webhook-6cdbc54649-bj7wk | Pulled | Successfully pulled image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:71241e7c8aa7f5e68444557713066cc5e3975159fe44c6da8adef05831396412" in 2.325s (2.325s including waiting). Image size: 490751413 bytes. |
 | metallb-system | kubelet | frr-k8s-qqrwm | Created | Created container: cp-metrics |
 | metallb-system | kubelet | frr-k8s-qqrwm | Started | Started container cp-metrics |
 | openshift-nmstate | kubelet | nmstate-handler-sbvf7 | Pulled | Successfully pulled image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:71241e7c8aa7f5e68444557713066cc5e3975159fe44c6da8adef05831396412" in 4.714s (4.714s including waiting). Image size: 490751413 bytes. |
 | openshift-nmstate | kubelet | nmstate-webhook-6cdbc54649-bj7wk | Started | Started container nmstate-webhook |
 | openshift-nmstate | kubelet | nmstate-metrics-fdff9cb8d-j4j8c | Pulled | Successfully pulled image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:71241e7c8aa7f5e68444557713066cc5e3975159fe44c6da8adef05831396412" in 2.336s (2.336s including waiting). Image size: 490751413 bytes. |
 | openshift-nmstate | kubelet | nmstate-metrics-fdff9cb8d-j4j8c | Created | Created container: nmstate-metrics |
 | openshift-nmstate | kubelet | nmstate-metrics-fdff9cb8d-j4j8c | Started | Started container nmstate-metrics |
 | metallb-system | kubelet | frr-k8s-2pxml | Pulled | Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:41205f57dd50b222640776ca5fcda336ca1541f53dae820d7bc6669f52c28a87" already present on machine |
 | metallb-system | kubelet | frr-k8s-2pxml | Created | Created container: cp-metrics |
 | metallb-system | kubelet | frr-k8s-2pxml | Started | Started container cp-metrics |
 | openshift-nmstate | kubelet | nmstate-metrics-fdff9cb8d-j4j8c | Pulled | Container image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:784c4667a867abdbec6d31a4bbde52676a0f37f8e448eaae37568a46fcdeace7" already present on machine |
 | metallb-system | kubelet | speaker-7mkjj | Pulled | Successfully pulled image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:784c4667a867abdbec6d31a4bbde52676a0f37f8e448eaae37568a46fcdeace7" in 1.455s (1.455s including waiting). Image size: 458126368 bytes. |
 | openshift-nmstate | kubelet | nmstate-metrics-fdff9cb8d-j4j8c | Started | Started container kube-rbac-proxy |
 | metallb-system | kubelet | frr-k8s-2pxml | Pulled | Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:41205f57dd50b222640776ca5fcda336ca1541f53dae820d7bc6669f52c28a87" already present on machine |
 | metallb-system | kubelet | frr-k8s-2pxml | Started | Started container reloader |
 | metallb-system | kubelet | frr-k8s-2pxml | Created | Created container: reloader |
 | metallb-system | kubelet | frr-k8s-2pxml | Pulled | Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:41205f57dd50b222640776ca5fcda336ca1541f53dae820d7bc6669f52c28a87" already present on machine |
 | metallb-system | kubelet | frr-k8s-2pxml | Started | Started container frr |
 | metallb-system | kubelet | frr-k8s-2pxml | Created | Created container: frr |
 | metallb-system | kubelet | frr-k8s-2pxml | Pulled | Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:41205f57dd50b222640776ca5fcda336ca1541f53dae820d7bc6669f52c28a87" already present on machine |
 | metallb-system | kubelet | frr-k8s-2pxml | Started | Started container controller |
 | metallb-system | kubelet | frr-k8s-2pxml | Created | Created container: controller |
 | metallb-system | kubelet | frr-k8s-2pxml | Pulled | Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:41205f57dd50b222640776ca5fcda336ca1541f53dae820d7bc6669f52c28a87" already present on machine |
 | metallb-system | kubelet | frr-k8s-nnbg4 | Pulled | Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:41205f57dd50b222640776ca5fcda336ca1541f53dae820d7bc6669f52c28a87" already present on machine |
 | metallb-system | kubelet | frr-k8s-nnbg4 | Created | Created container: cp-metrics |
 | metallb-system | kubelet | frr-k8s-nnbg4 | Started | Started container cp-metrics |
 | metallb-system | kubelet | frr-k8s-qqrwm | Pulled | Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:41205f57dd50b222640776ca5fcda336ca1541f53dae820d7bc6669f52c28a87" already present on machine |
 | metallb-system | kubelet | frr-k8s-qqrwm | Created | Created container: controller |
 | metallb-system | kubelet | frr-k8s-qqrwm | Started | Started container controller |
 | metallb-system | kubelet | frr-k8s-qqrwm | Pulled | Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:41205f57dd50b222640776ca5fcda336ca1541f53dae820d7bc6669f52c28a87" already present on machine |
 | metallb-system | kubelet | frr-k8s-qqrwm | Created | Created container: frr |
 | metallb-system | kubelet | frr-k8s-qqrwm | Started | Started container frr |
 | metallb-system | kubelet | frr-k8s-qqrwm | Pulled | Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:41205f57dd50b222640776ca5fcda336ca1541f53dae820d7bc6669f52c28a87" already present on machine |
 | metallb-system | kubelet | frr-k8s-qqrwm | Created | Created container: reloader |
 | metallb-system | kubelet | frr-k8s-qqrwm | Started | Started container reloader |
 | metallb-system | kubelet | frr-k8s-qqrwm | Pulled | Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:41205f57dd50b222640776ca5fcda336ca1541f53dae820d7bc6669f52c28a87" already present on machine |
 | metallb-system | kubelet | frr-k8s-qqrwm | Created | Created container: frr-metrics |
 | metallb-system | kubelet | frr-k8s-qqrwm | Started | Started container frr-metrics |
 | metallb-system | kubelet | frr-k8s-qqrwm | Pulled | Container image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:784c4667a867abdbec6d31a4bbde52676a0f37f8e448eaae37568a46fcdeace7" already present on machine |
 | metallb-system | kubelet | frr-k8s-qqrwm | Created | Created container: kube-rbac-proxy |
 | metallb-system | kubelet | frr-k8s-qqrwm | Started | Started container kube-rbac-proxy |
 | openshift-nmstate | kubelet | nmstate-handler-g87gn | Pulled | Successfully pulled image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:71241e7c8aa7f5e68444557713066cc5e3975159fe44c6da8adef05831396412" in 2.866s (2.866s including waiting). Image size: 490751413 bytes. |
 | openshift-nmstate | kubelet | nmstate-handler-g87gn | Created | Created container: nmstate-handler |
openshift-nmstate |
kubelet |
nmstate-handler-g87gn |
Started |
Started container nmstate-handler | |
openshift-nmstate |
kubelet |
nmstate-handler-lkd88 |
Pulled |
Successfully pulled image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:71241e7c8aa7f5e68444557713066cc5e3975159fe44c6da8adef05831396412" in 5.14s (5.14s including waiting). Image size: 490751413 bytes. | |
openshift-nmstate |
kubelet |
nmstate-handler-lkd88 |
Created |
Created container: nmstate-handler | |
openshift-nmstate |
kubelet |
nmstate-handler-lkd88 |
Started |
Started container nmstate-handler | |
openshift-nmstate |
kubelet |
nmstate-handler-sbvf7 |
Created |
Created container: nmstate-handler | |
openshift-nmstate |
kubelet |
nmstate-handler-sbvf7 |
Started |
Started container nmstate-handler | |
metallb-system |
kubelet |
speaker-7mkjj |
Started |
Started container kube-rbac-proxy | |
openshift-nmstate |
kubelet |
nmstate-metrics-fdff9cb8d-j4j8c |
Created |
Created container: kube-rbac-proxy | |
metallb-system |
kubelet |
speaker-7mkjj |
Created |
Created container: kube-rbac-proxy | |
metallb-system |
kubelet |
frr-k8s-nnbg4 |
Pulled |
Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:41205f57dd50b222640776ca5fcda336ca1541f53dae820d7bc6669f52c28a87" already present on machine | |
metallb-system |
kubelet |
frr-k8s-nnbg4 |
Started |
Started container controller | |
metallb-system |
kubelet |
frr-k8s-2pxml |
Created |
Created container: kube-rbac-proxy | |
metallb-system |
kubelet |
frr-k8s-2pxml |
Pulled |
Container image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:784c4667a867abdbec6d31a4bbde52676a0f37f8e448eaae37568a46fcdeace7" already present on machine | |
metallb-system |
kubelet |
frr-k8s-2pxml |
Started |
Started container frr-metrics | |
metallb-system |
kubelet |
frr-k8s-nnbg4 |
Started |
Started container frr-metrics | |
metallb-system |
kubelet |
frr-k8s-2pxml |
Created |
Created container: frr-metrics | |
openshift-nmstate |
kubelet |
nmstate-console-plugin-6b874cbd85-h8v5p |
Created |
Created container: nmstate-console-plugin | |
metallb-system |
kubelet |
frr-k8s-nnbg4 |
Started |
Started container frr | |
metallb-system |
kubelet |
frr-k8s-nnbg4 |
Created |
Created container: frr | |
metallb-system |
kubelet |
frr-k8s-nnbg4 |
Pulled |
Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:41205f57dd50b222640776ca5fcda336ca1541f53dae820d7bc6669f52c28a87" already present on machine | |
metallb-system |
kubelet |
frr-k8s-nnbg4 |
Pulled |
Container image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:784c4667a867abdbec6d31a4bbde52676a0f37f8e448eaae37568a46fcdeace7" already present on machine | |
openshift-nmstate |
kubelet |
nmstate-console-plugin-6b874cbd85-h8v5p |
Started |
Started container nmstate-console-plugin | |
openshift-nmstate |
kubelet |
nmstate-console-plugin-6b874cbd85-h8v5p |
Pulled |
Successfully pulled image "registry.redhat.io/openshift4/nmstate-console-plugin-rhel9@sha256:98eacebec0d128e18d9109d3eadb5a1470ec990f11ad3e717a6638a8675d6e66" in 2.474s (2.474s including waiting). Image size: 446311450 bytes. | |
metallb-system |
kubelet |
frr-k8s-nnbg4 |
Created |
Created container: controller | |
metallb-system |
kubelet |
frr-k8s-2pxml |
Started |
Started container kube-rbac-proxy | |
metallb-system |
kubelet |
frr-k8s-nnbg4 |
Pulled |
Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:41205f57dd50b222640776ca5fcda336ca1541f53dae820d7bc6669f52c28a87" already present on machine | |
openshift-etcd |
kubelet |
etcd-master-0 |
ProbeError |
Startup probe error: Get "https://192.168.34.10:9980/readyz": net/http: request canceled (Client.Timeout exceeded while awaiting headers) body: | |
metallb-system |
kubelet |
frr-k8s-nnbg4 |
Created |
Created container: reloader | |
metallb-system |
kubelet |
frr-k8s-nnbg4 |
Started |
Started container reloader | |
metallb-system |
kubelet |
frr-k8s-nnbg4 |
Pulled |
Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:41205f57dd50b222640776ca5fcda336ca1541f53dae820d7bc6669f52c28a87" already present on machine | |
metallb-system |
kubelet |
frr-k8s-nnbg4 |
Created |
Created container: frr-metrics | |
metallb-system |
kubelet |
frr-k8s-nnbg4 |
Created |
Created container: kube-rbac-proxy | |
metallb-system |
kubelet |
frr-k8s-nnbg4 |
Started |
Started container kube-rbac-proxy | |
openshift-console |
replicaset-controller |
console-564c479f |
SuccessfulDelete |
Deleted pod: console-564c479f-7bglk | |
openshift-console |
deployment-controller |
console |
ScalingReplicaSet |
Scaled down replica set console-564c479f to 0 from 1 | |
openshift-console |
kubelet |
console-564c479f-7bglk |
Killing |
Stopping container console | |
(x3) | openshift-console-operator |
console-operator-status-controller-statussyncer_console |
console-operator |
OperatorStatusChanged |
Status for clusteroperator/console changed: Progressing changed from True to False ("All is well") |
(x2) | openshift-console-operator |
console-operator-status-controller-statussyncer_console |
console-operator |
OperatorStatusChanged |
Status for clusteroperator/console changed: Progressing message changed from "SyncLoopRefreshProgressing: working toward version 4.18.25, 1 replicas available" to "SyncLoopRefreshProgressing: working toward version 4.18.25, 2 replicas available" |
openshift-etcd-operator |
openshift-cluster-etcd-operator-fsynccontroller |
etcd-operator |
EtcdLeaderChangeMetrics |
Detected leader change increase of 2.235317577036233 over 5 minutes on "None"; disk metrics are: etcd-master-0=0.012465,etcd-master-1=0.015389,etcd-master-2=0.113772. Most often this is as a result of inadequate storage or sometimes due to networking issues. | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-installer-controller |
kube-apiserver-operator |
NodeCurrentRevisionChanged |
Updated node "master-0" from revision 5 to 6 because static pod is ready | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Progressing message changed from "NodeInstallerProgressing: 2 nodes are at revision 5; 1 node is at revision 6" to "NodeInstallerProgressing: 1 node is at revision 5; 2 nodes are at revision 6",Available message changed from "StaticPodsAvailable: 3 nodes are active; 2 nodes are at revision 5; 1 node is at revision 6" to "StaticPodsAvailable: 3 nodes are active; 1 node is at revision 5; 2 nodes are at revision 6" | |
openshift-console |
kubelet |
console-5958979c8-mpc88 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8f562f655905d420982e90e7817214586a6e8103e7ef15d1dd57f2ae5ee16edb" already present on machine | |
openshift-console |
kubelet |
console-5958979c8-mpc88 |
Created |
Created container: console | |
openshift-console |
kubelet |
console-5958979c8-mpc88 |
Started |
Started container console | |
openshift-console |
multus |
console-5958979c8-mpc88 |
AddedInterface |
Add eth0 [10.130.0.49/23] from ovn-kubernetes | |
openshift-storage |
daemonset-controller |
vg-manager |
SuccessfulCreate |
Created pod: vg-manager-zvnk6 | |
openshift-storage |
daemonset-controller |
vg-manager |
SuccessfulCreate |
Created pod: vg-manager-jdht5 | |
openshift-storage |
daemonset-controller |
vg-manager |
SuccessfulCreate |
Created pod: vg-manager-sr6bp | |
openshift-storage |
LVMClusterReconciler |
lvmcluster |
ResourceReconciliationIncomplete |
LVMCluster's resources are not yet fully synchronized: csi node master-0 does not have driver topolvm.io | |
openshift-storage |
multus |
vg-manager-sr6bp |
AddedInterface |
Add eth0 [10.130.0.50/23] from ovn-kubernetes | |
openshift-storage |
multus |
vg-manager-zvnk6 |
AddedInterface |
Add eth0 [10.129.0.96/23] from ovn-kubernetes | |
openshift-storage |
kubelet |
vg-manager-zvnk6 |
Pulling |
Pulling image "registry.redhat.io/lvms4/lvms-rhel9-operator@sha256:80f5e48600d9b42add118f1325ced01f49d544a9a4824a7da4e3ba805d64371f" | |
openshift-storage |
multus |
vg-manager-jdht5 |
AddedInterface |
Add eth0 [10.128.0.107/23] from ovn-kubernetes | |
openshift-storage |
kubelet |
vg-manager-jdht5 |
Pulling |
Pulling image "registry.redhat.io/lvms4/lvms-rhel9-operator@sha256:80f5e48600d9b42add118f1325ced01f49d544a9a4824a7da4e3ba805d64371f" | |
(x12) | openshift-storage |
LVMClusterReconciler |
lvmcluster |
ResourceReconciliationIncomplete |
LVMCluster's resources are not yet fully synchronized: csi node master-0 does not have driver topolvm.io DaemonSet is not considered ready: the DaemonSet is not ready: openshift-storage/vg-manager. 0 out of 2 expected pods are ready |
(x2) | openshift-storage |
kubelet |
vg-manager-sr6bp |
Started |
Started container vg-manager |
(x2) | openshift-storage |
kubelet |
vg-manager-sr6bp |
Created |
Created container: vg-manager |
(x2) | openshift-storage |
kubelet |
vg-manager-sr6bp |
Pulled |
Container image "registry.redhat.io/lvms4/lvms-rhel9-operator@sha256:80f5e48600d9b42add118f1325ced01f49d544a9a4824a7da4e3ba805d64371f" already present on machine |
(x2) | openshift-storage |
LVMClusterReconciler |
lvmcluster |
ResourceReconciliationIncomplete |
LVMCluster's resources are not yet fully synchronized: csi node master-2 does not have driver topolvm.io DaemonSet is not considered ready: the DaemonSet is not ready: openshift-storage/vg-manager. 0 out of 2 expected pods are ready |
openshift-etcd-operator |
openshift-cluster-etcd-operator-fsynccontroller |
etcd-operator |
EtcdLeaderChangeMetrics |
Detected leader change increase of 2.2308245653505048 over 5 minutes on "None"; disk metrics are: etcd-master-0=0.012394,etcd-master-1=0.015389,etcd-master-2=0.128996. Most often this is as a result of inadequate storage or sometimes due to networking issues. | |
openshift-storage |
kubelet |
vg-manager-jdht5 |
Pulled |
Successfully pulled image "registry.redhat.io/lvms4/lvms-rhel9-operator@sha256:80f5e48600d9b42add118f1325ced01f49d544a9a4824a7da4e3ba805d64371f" in 5.041s (5.041s including waiting). Image size: 294806923 bytes. | |
openshift-storage |
kubelet |
vg-manager-zvnk6 |
Pulled |
Successfully pulled image "registry.redhat.io/lvms4/lvms-rhel9-operator@sha256:80f5e48600d9b42add118f1325ced01f49d544a9a4824a7da4e3ba805d64371f" in 5.615s (5.615s including waiting). Image size: 294806923 bytes. | |
(x10) | openshift-storage |
LVMClusterReconciler |
lvmcluster |
ResourceReconciliationIncomplete |
LVMCluster's resources are not yet fully synchronized: csi node master-1 does not have driver topolvm.io DaemonSet is not considered ready: the DaemonSet is not ready: openshift-storage/vg-manager. 0 out of 2 expected pods are ready |
openshift-storage |
kubelet |
vg-manager-jdht5 |
Pulled |
Container image "registry.redhat.io/lvms4/lvms-rhel9-operator@sha256:80f5e48600d9b42add118f1325ced01f49d544a9a4824a7da4e3ba805d64371f" already present on machine | |
(x2) | openshift-storage |
kubelet |
vg-manager-jdht5 |
Started |
Started container vg-manager |
(x2) | openshift-storage |
kubelet |
vg-manager-jdht5 |
Created |
Created container: vg-manager |
openshift-storage |
kubelet |
vg-manager-zvnk6 |
Pulled |
Container image "registry.redhat.io/lvms4/lvms-rhel9-operator@sha256:80f5e48600d9b42add118f1325ced01f49d544a9a4824a7da4e3ba805d64371f" already present on machine | |
(x2) | openshift-storage |
kubelet |
vg-manager-zvnk6 |
Created |
Created container: vg-manager |
(x2) | openshift-storage |
kubelet |
vg-manager-zvnk6 |
Started |
Started container vg-manager |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ConfigMapUpdated |
Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-installer-controller |
kube-apiserver-operator |
NodeTargetRevisionChanged |
Updating node "master-1" from revision 5 to 6 because node master-1 with revision 5 is the oldest | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-installer-controller |
etcd-operator |
NodeCurrentRevisionChanged |
Updated node "master-0" from revision 8 to 10 because static pod is ready | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-installer-controller |
kube-apiserver-operator |
PodCreated |
Created Pod/installer-6-master-1 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver |
kubelet |
installer-6-master-1 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine | |
openshift-kube-apiserver |
multus |
installer-6-master-1 |
AddedInterface |
Add eth0 [10.128.0.108/23] from ovn-kubernetes | |
openshift-kube-controller-manager |
cluster-policy-controller-namespace-security-allocation-controller |
kube-controller-manager-master-1 |
CreatedSCCRanges |
created SCC ranges for openstack namespace | |
openshift-kube-apiserver |
kubelet |
installer-6-master-1 |
Started |
Started container installer | |
openshift-kube-apiserver |
kubelet |
installer-6-master-1 |
Created |
Created container: installer | |
openshift-kube-controller-manager |
cluster-policy-controller-namespace-security-allocation-controller |
kube-controller-manager-master-1 |
CreatedSCCRanges |
created SCC ranges for openstack-operators namespace | |
openstack-operators |
kubelet |
openstack-operator-index-ff576 |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/openstack-operator-index:latest" | |
openstack-operators |
multus |
openstack-operator-index-ff576 |
AddedInterface |
Add eth0 [10.130.0.51/23] from ovn-kubernetes | |
openstack-operators |
kubelet |
openstack-operator-index-nw58t |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/openstack-operator-index:latest" | |
openstack-operators |
multus |
openstack-operator-index-nw58t |
AddedInterface |
Add eth0 [10.130.0.52/23] from ovn-kubernetes | |
openstack-operators |
kubelet |
openstack-operator-index-nw58t |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/openstack-operator-index:latest" in 5.466s (5.466s including waiting). Image size: 911633238 bytes. | |
openstack-operators |
kubelet |
openstack-operator-index-ff576 |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/openstack-operator-index:latest" in 8.242s (8.242s including waiting). Image size: 911633238 bytes. | |
openstack-operators |
kubelet |
openstack-operator-index-ff576 |
Killing |
Stopping container registry-server | |
openstack-operators |
kubelet |
openstack-operator-index-nw58t |
Started |
Started container registry-server | |
openstack-operators |
kubelet |
openstack-operator-index-ff576 |
Started |
Started container registry-server | |
openstack-operators |
kubelet |
openstack-operator-index-ff576 |
Created |
Created container: registry-server | |
openstack-operators |
kubelet |
openstack-operator-index-nw58t |
Created |
Created container: registry-server | |
(x5) | default |
operator-lifecycle-manager |
openstack-operators |
ResolutionFailed |
error using catalogsource openstack-operators/openstack-operator-index: failed to list bundles: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 172.30.32.208:50051: i/o timeout" |
openstack-operators |
job-controller |
32da80840a2017f27ed4ad61f02adc64a25aa18e8dad0409953372036a29478 |
SuccessfulCreate |
Created pod: 32da80840a2017f27ed4ad61f02adc64a25aa18e8dad0409953372036ajskqr | |
openstack-operators |
kubelet |
32da80840a2017f27ed4ad61f02adc64a25aa18e8dad0409953372036ajskqr |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/openstack-operator-bundle:db2406fc6f4d16351b74a75ecc7148f821e0f917" | |
openstack-operators |
multus |
32da80840a2017f27ed4ad61f02adc64a25aa18e8dad0409953372036ajskqr |
AddedInterface |
Add eth0 [10.129.0.97/23] from ovn-kubernetes | |
openstack-operators |
kubelet |
32da80840a2017f27ed4ad61f02adc64a25aa18e8dad0409953372036ajskqr |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac3e8e21a2acf57632da1156613d3ce424cc06446f4bd47349c7919367e1ff0f" already present on machine | |
openstack-operators |
kubelet |
32da80840a2017f27ed4ad61f02adc64a25aa18e8dad0409953372036ajskqr |
Started |
Started container util | |
openstack-operators |
kubelet |
32da80840a2017f27ed4ad61f02adc64a25aa18e8dad0409953372036ajskqr |
Created |
Created container: util | |
openstack-operators |
kubelet |
32da80840a2017f27ed4ad61f02adc64a25aa18e8dad0409953372036ajskqr |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/openstack-operator-bundle:db2406fc6f4d16351b74a75ecc7148f821e0f917" in 1.125s (1.125s including waiting). Image size: 109629 bytes. | |
openstack-operators |
kubelet |
32da80840a2017f27ed4ad61f02adc64a25aa18e8dad0409953372036ajskqr |
Started |
Started container pull | |
openstack-operators |
kubelet |
32da80840a2017f27ed4ad61f02adc64a25aa18e8dad0409953372036ajskqr |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5" already present on machine | |
openstack-operators |
kubelet |
32da80840a2017f27ed4ad61f02adc64a25aa18e8dad0409953372036ajskqr |
Created |
Created container: pull | |
openstack-operators |
kubelet |
32da80840a2017f27ed4ad61f02adc64a25aa18e8dad0409953372036ajskqr |
Started |
Started container extract | |
openstack-operators |
kubelet |
32da80840a2017f27ed4ad61f02adc64a25aa18e8dad0409953372036ajskqr |
Created |
Created container: extract | |
openstack-operators |
job-controller |
32da80840a2017f27ed4ad61f02adc64a25aa18e8dad0409953372036a29478 |
Completed |
Job completed | |
openstack-operators |
operator-lifecycle-manager |
openstack-operator.v0.5.0 |
RequirementsUnknown |
requirements not yet checked | |
(x2) | openstack-operators |
operator-lifecycle-manager |
openstack-operator.v0.5.0 |
RequirementsNotMet |
one or more requirements couldn't be found |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master-1 |
Killing |
Stopping container kube-apiserver | |
openshift-kube-apiserver |
apiserver |
kube-apiserver-master-1 |
ShutdownInitiated |
Received signal to terminate, becoming unready, but keeping serving | |
openshift-kube-apiserver |
apiserver |
kube-apiserver-master-1 |
TerminationPreShutdownHooksFinished |
All pre-shutdown hooks have been finished | |
openshift-kube-apiserver |
static-pod-installer |
installer-6-master-1 |
StaticPodInstallerCompleted |
Successfully installed revision 6 | |
openstack-operators |
operator-lifecycle-manager |
openstack-operator.v0.5.0 |
InstallWaiting |
installing: waiting for deployment openstack-operator-controller-operator to become ready: waiting for spec update of deployment "openstack-operator-controller-operator" to be observed... | |
openstack-operators |
deployment-controller |
openstack-operator-controller-operator |
ScalingReplicaSet |
Scaled up replica set openstack-operator-controller-operator-64895cd698 to 1 | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-fsynccontroller |
etcd-operator |
EtcdLeaderChangeMetrics |
Detected leader change increase of 2.096370135111055 over 5 minutes on "None"; disk metrics are: etcd-master-0=0.007987,etcd-master-1=0.026542,etcd-master-2=0.116480. Most often this is as a result of inadequate storage or sometimes due to networking issues. | |
openstack-operators |
replicaset-controller |
openstack-operator-controller-operator-64895cd698 |
SuccessfulCreate |
Created pod: openstack-operator-controller-operator-64895cd698-tkclq | |
openstack-operators |
operator-lifecycle-manager |
openstack-operator.v0.5.0 |
AllRequirementsMet |
all requirements found, attempting install | |
openstack-operators |
kubelet |
openstack-operator-controller-operator-64895cd698-tkclq |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/openstack-operator@sha256:766f6cc606336f5197dc6f7b61bf140b28159516bf388f2ea65ed95013829a1c" | |
openstack-operators |
operator-lifecycle-manager |
openstack-operator.v0.5.0 |
InstallWaiting |
installing: waiting for deployment openstack-operator-controller-operator to become ready: deployment "openstack-operator-controller-operator" not available: Deployment does not have minimum availability. | |
openstack-operators |
multus |
openstack-operator-controller-operator-64895cd698-tkclq |
AddedInterface |
Add eth0 [10.130.0.53/23] from ovn-kubernetes | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-guard-master-1 |
ProbeError |
Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]api-openshift-apiserver-available ok [+]api-openshift-oauth-apiserver-available ok [+]informer-sync ok [+]poststarthook/openshift.io-openshift-apiserver-reachable ok [+]poststarthook/openshift.io-oauth-apiserver-reachable ok [+]poststarthook/start-apiserver-admission-initializer ok [+]poststarthook/quota.openshift.io-clusterquotamapping ok [+]poststarthook/openshift.io-api-request-count-filter ok [+]poststarthook/openshift.io-startkubeinformers ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/priority-and-fairness-config-consumer ok [+]poststarthook/priority-and-fairness-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/start-apiextensions-informers ok [+]poststarthook/start-apiextensions-controllers ok [+]poststarthook/crd-informer-synced ok [+]poststarthook/start-system-namespaces-controller ok [+]poststarthook/start-cluster-authentication-info-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok [+]poststarthook/start-legacy-token-tracking-controller ok [+]poststarthook/start-service-ip-repair-controllers ok [+]poststarthook/rbac/bootstrap-roles ok [+]poststarthook/scheduling/bootstrap-system-priority-classes ok [+]poststarthook/priority-and-fairness-config-producer ok [+]poststarthook/bootstrap-controller ok [+]poststarthook/aggregator-reload-proxy-client-cert ok [+]poststarthook/start-kube-aggregator-informers ok [+]poststarthook/apiservice-status-local-available-controller ok [+]poststarthook/apiservice-status-remote-available-controller ok [+]poststarthook/apiservice-registration-controller ok [+]poststarthook/apiservice-wait-for-first-sync ok [+]poststarthook/apiservice-discovery-controller ok [+]poststarthook/kube-apiserver-autoregistration ok [+]autoregister-completion ok 
[+]poststarthook/apiservice-openapi-controller ok [+]poststarthook/apiservice-openapiv3-controller ok [-]shutdown failed: reason withheld readyz check failed | |
(x37) | openshift-kube-apiserver |
kubelet |
kube-apiserver-guard-master-1 |
Unhealthy |
Readiness probe failed: HTTP probe failed with statuscode: 500 |
openstack-operators |
kubelet |
openstack-operator-controller-operator-64895cd698-tkclq |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/openstack-operator@sha256:766f6cc606336f5197dc6f7b61bf140b28159516bf388f2ea65ed95013829a1c" in 3.263s (3.263s including waiting). Image size: 265163335 bytes. | |
openstack-operators |
kubelet |
openstack-operator-controller-operator-64895cd698-tkclq |
Created |
Created container: operator | |
openstack-operators |
kubelet |
openstack-operator-controller-operator-64895cd698-tkclq |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy@sha256:d28df2924a366ed857d6c2c14baac9741238032d41f3d02c12cd757189b68b8a" | |
openstack-operators |
openstack-operator-controller-operator-64895cd698-tkclq_9a8a01ba-fc65-47b8-83c5-fdbc06294a34 |
20ca801f.openstack.org |
LeaderElection |
openstack-operator-controller-operator-64895cd698-tkclq_9a8a01ba-fc65-47b8-83c5-fdbc06294a34 became leader | |
openstack-operators |
kubelet |
openstack-operator-controller-operator-64895cd698-tkclq |
Started |
Started container operator | |
openstack-operators |
kubelet |
openstack-operator-controller-operator-64895cd698-tkclq |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/kube-rbac-proxy@sha256:d28df2924a366ed857d6c2c14baac9741238032d41f3d02c12cd757189b68b8a" in 2.149s (2.149s including waiting). Image size: 68421467 bytes. | |
openstack-operators |
kubelet |
openstack-operator-controller-operator-64895cd698-tkclq |
Created |
Created container: kube-rbac-proxy | |
openstack-operators |
kubelet |
openstack-operator-controller-operator-64895cd698-tkclq |
Started |
Started container kube-rbac-proxy | |
openstack-operators |
operator-lifecycle-manager |
openstack-operator.v0.5.0 |
InstallWaiting |
installing: waiting for deployment openstack-operator-controller-operator to become ready: deployment "openstack-operator-controller-operator" waiting for 1 outdated replica(s) to be terminated | |
openstack-operators |
replicaset-controller |
openstack-operator-controller-operator-548bfb9499 |
SuccessfulCreate |
Created pod: openstack-operator-controller-operator-548bfb9499-crk7m | |
openstack-operators |
deployment-controller |
openstack-operator-controller-operator |
ScalingReplicaSet |
Scaled up replica set openstack-operator-controller-operator-548bfb9499 to 1 | |
(x2) | openstack-operators |
operator-lifecycle-manager |
openstack-operator.v0.5.0 |
InstallSucceeded |
waiting for install components to report healthy |
openstack-operators |
operator-lifecycle-manager |
openstack-operator.v0.5.0 |
ComponentUnhealthy |
installing: deployment changed old hash=9LbpGQaz2QroTkZ6lrc4oIxMH0mYb0V5rdZlvj, new hash=7ZFBRXm6ant0bWD9djC9StqNVDFpHd1jWbS5rJ | |
openstack-operators |
kubelet |
openstack-operator-controller-operator-548bfb9499-crk7m |
Created |
Created container: operator | |
openstack-operators |
multus |
openstack-operator-controller-operator-548bfb9499-crk7m |
AddedInterface |
Add eth0 [10.130.0.54/23] from ovn-kubernetes | |
openstack-operators |
kubelet |
openstack-operator-controller-operator-548bfb9499-crk7m |
Pulled |
Container image "quay.io/openstack-k8s-operators/openstack-operator@sha256:766f6cc606336f5197dc6f7b61bf140b28159516bf388f2ea65ed95013829a1c" already present on machine | |
openstack-operators |
kubelet |
openstack-operator-controller-operator-548bfb9499-crk7m |
Pulled |
Container image "quay.io/openstack-k8s-operators/kube-rbac-proxy@sha256:d28df2924a366ed857d6c2c14baac9741238032d41f3d02c12cd757189b68b8a" already present on machine | |
openstack-operators |
kubelet |
openstack-operator-controller-operator-548bfb9499-crk7m |
Started |
Started container operator | |
openstack-operators |
kubelet |
openstack-operator-controller-operator-548bfb9499-crk7m |
Started |
Started container kube-rbac-proxy | |
openstack-operators |
kubelet |
openstack-operator-controller-operator-548bfb9499-crk7m |
Created |
Created container: kube-rbac-proxy | |
openshift-kube-apiserver |
cert-regeneration-controller |
cert-regeneration-controller-lock |
LeaderElection |
master-2_90d73f50-e2bb-4ed9-ac4e-41bd025a6d0c became leader | |
openstack-operators |
kubelet |
openstack-operator-controller-operator-64895cd698-tkclq |
Killing |
Stopping container kube-rbac-proxy | |
(x2) | openstack-operators |
operator-lifecycle-manager |
openstack-operator.v0.5.0 |
InstallSucceeded |
install strategy completed with no errors |
openstack-operators |
replicaset-controller |
openstack-operator-controller-operator-64895cd698 |
SuccessfulDelete |
Deleted pod: openstack-operator-controller-operator-64895cd698-tkclq | |
openstack-operators |
kubelet |
openstack-operator-controller-operator-64895cd698-tkclq |
Killing |
Stopping container operator | |
openstack-operators |
deployment-controller |
openstack-operator-controller-operator |
ScalingReplicaSet |
Scaled down replica set openstack-operator-controller-operator-64895cd698 to 0 from 1 | |
openshift-marketplace |
kubelet |
redhat-marketplace-ck2g8 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac3e8e21a2acf57632da1156613d3ce424cc06446f4bd47349c7919367e1ff0f" already present on machine | |
openshift-marketplace |
multus |
redhat-marketplace-ck2g8 |
AddedInterface |
Add eth0 [10.129.0.98/23] from ovn-kubernetes | |
openshift-marketplace |
kubelet |
redhat-marketplace-ck2g8 |
Pulling |
Pulling image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" | |
openshift-marketplace |
kubelet |
redhat-marketplace-ck2g8 |
Started |
Started container extract-utilities | |
openshift-marketplace |
kubelet |
redhat-marketplace-ck2g8 |
Created |
Created container: extract-utilities | |
openshift-marketplace |
kubelet |
redhat-marketplace-ck2g8 |
Started |
Started container extract-content | |
openshift-marketplace |
kubelet |
redhat-marketplace-ck2g8 |
Created |
Created container: extract-content | |
openshift-marketplace |
kubelet |
redhat-marketplace-ck2g8 |
Pulled |
Successfully pulled image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" in 1.061s (1.061s including waiting). Image size: 1057212814 bytes. | |
openshift-marketplace |
kubelet |
redhat-marketplace-ck2g8 |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5" | |
openshift-marketplace |
kubelet |
redhat-marketplace-ck2g8 |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5" in 538ms (538ms including waiting). Image size: 911296197 bytes. | |
openshift-marketplace |
kubelet |
redhat-marketplace-ck2g8 |
Created |
Created container: registry-server | |
openshift-marketplace |
kubelet |
redhat-marketplace-ck2g8 |
Started |
Started container registry-server | |
openshift-marketplace |
kubelet |
redhat-marketplace-ck2g8 |
Killing |
Stopping container registry-server | |
openshift-kube-apiserver |
apiserver |
kube-apiserver-master-1 |
HTTPServerStoppedListening |
HTTP Server has stopped listening | |
| openshift-kube-apiserver | apiserver | kube-apiserver-master-1 | AfterShutdownDelayDuration | The minimal shutdown duration of 1m10s finished |
| openshift-kube-apiserver | apiserver | kube-apiserver-master-1 | InFlightRequestsDrained | All non long-running request(s) in-flight have drained |
| openstack-operators | openstack-operator-controller-operator-548bfb9499-crk7m_20c7c54e-b998-4c05-a3a4-b20479232b78 | 20ca801f.openstack.org | LeaderElection | openstack-operator-controller-operator-548bfb9499-crk7m_20c7c54e-b998-4c05-a3a4-b20479232b78 became leader |
| openshift-marketplace | multus | community-operators-5zr4r | AddedInterface | Add eth0 [10.129.0.99/23] from ovn-kubernetes |
| openshift-kube-apiserver | apiserver | kube-apiserver-master-1 | TerminationGracefulTerminationFinished | All pending requests processed |
| openshift-marketplace | kubelet | community-operators-5zr4r | Started | Started container extract-utilities |
| openshift-marketplace | kubelet | community-operators-5zr4r | Pulling | Pulling image "registry.redhat.io/redhat/community-operator-index:v4.18" |
| openshift-marketplace | kubelet | community-operators-5zr4r | Created | Created container: extract-utilities |
| openshift-marketplace | kubelet | community-operators-5zr4r | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac3e8e21a2acf57632da1156613d3ce424cc06446f4bd47349c7919367e1ff0f" already present on machine |
| openshift-marketplace | kubelet | community-operators-5zr4r | Created | Created container: extract-content |
| openshift-marketplace | kubelet | community-operators-5zr4r | Pulled | Successfully pulled image "registry.redhat.io/redhat/community-operator-index:v4.18" in 905ms (905ms including waiting). Image size: 1181047702 bytes. |
| openshift-marketplace | kubelet | community-operators-5zr4r | Started | Started container extract-content |
| openshift-marketplace | kubelet | community-operators-5zr4r | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5" |
| openshift-marketplace | kubelet | community-operators-5zr4r | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5" in 411ms (411ms including waiting). Image size: 911296197 bytes. |
| openshift-marketplace | kubelet | community-operators-5zr4r | Created | Created container: registry-server |
| openshift-marketplace | kubelet | community-operators-5zr4r | Started | Started container registry-server |
| openshift-kube-apiserver | kubelet | kube-apiserver-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b283544da0bfbf6c8c5a11e0ca9fb4daaf4ac4ec910b30c07c7bef65a98f11d" already present on machine |
| openshift-kube-apiserver | kubelet | kube-apiserver-master-1 | Started | Started container setup |
| openshift-kube-apiserver | kubelet | kube-apiserver-master-1 | Created | Created container: setup |
| openshift-kube-apiserver | kubelet | kube-apiserver-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine |
| openshift-kube-apiserver | kubelet | kube-apiserver-master-1 | Created | Created container: kube-apiserver-cert-syncer |
| openshift-kube-apiserver | kubelet | kube-apiserver-master-1 | Started | Started container kube-apiserver |
| openshift-kube-apiserver | kubelet | kube-apiserver-master-1 | Created | Created container: kube-apiserver |
| openshift-kube-apiserver | kubelet | kube-apiserver-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b283544da0bfbf6c8c5a11e0ca9fb4daaf4ac4ec910b30c07c7bef65a98f11d" already present on machine |
| openshift-kube-apiserver | kubelet | kube-apiserver-master-1 | Started | Started container kube-apiserver-cert-syncer |
| openshift-kube-apiserver | kubelet | kube-apiserver-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine |
| openshift-kube-apiserver | kubelet | kube-apiserver-master-1 | Created | Created container: kube-apiserver-cert-regeneration-controller |
| openshift-kube-apiserver | kubelet | kube-apiserver-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine |
| openshift-kube-apiserver | kubelet | kube-apiserver-master-1 | Created | Created container: kube-apiserver-insecure-readyz |
| openshift-kube-apiserver | kubelet | kube-apiserver-master-1 | Started | Started container kube-apiserver-cert-regeneration-controller |
| openshift-kube-apiserver | kubelet | kube-apiserver-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine |
| openshift-kube-apiserver | kubelet | kube-apiserver-master-1 | Started | Started container kube-apiserver-insecure-readyz |
| openshift-kube-apiserver | kubelet | kube-apiserver-master-1 | Created | Created container: kube-apiserver-check-endpoints |
| openshift-kube-apiserver | kubelet | kube-apiserver-master-1 | Started | Started container kube-apiserver-check-endpoints |
| openshift-kube-apiserver | apiserver | kube-apiserver-master-1 | KubeAPIReadyz | readyz=true |
| openstack-operators | deployment-controller | cinder-operator-controller-manager | ScalingReplicaSet | Scaled up replica set cinder-operator-controller-manager-5484486656 to 1 |
| openstack-operators | cert-manager-certificates-trigger | infra-operator-serving-cert | Issuing | Issuing certificate as Secret does not exist |
| openstack-operators | cert-manager-certificates-key-manager | infra-operator-serving-cert | Generated | Stored new private key in temporary Secret resource "infra-operator-serving-cert-7swxk" |
| openstack-operators | cert-manager-certificates-request-manager | infra-operator-serving-cert | Requested | Created new CertificateRequest resource "infra-operator-serving-cert-1" |
| openstack-operators | replicaset-controller | barbican-operator-controller-manager-658c7b459c | SuccessfulCreate | Created pod: barbican-operator-controller-manager-658c7b459c-pwzgf |
| openstack-operators | deployment-controller | barbican-operator-controller-manager | ScalingReplicaSet | Scaled up replica set barbican-operator-controller-manager-658c7b459c to 1 |
| openstack-operators | replicaset-controller | cinder-operator-controller-manager-5484486656 | SuccessfulCreate | Created pod: cinder-operator-controller-manager-5484486656-vvnpp |
| openstack-operators | replicaset-controller | heat-operator-controller-manager-68fc865f87 | SuccessfulCreate | Created pod: heat-operator-controller-manager-68fc865f87-c8wmp |
| openstack-operators | replicaset-controller | designate-operator-controller-manager-67d84b9cc | SuccessfulCreate | Created pod: designate-operator-controller-manager-67d84b9cc-698kz |
| openstack-operators | deployment-controller | test-operator-controller-manager | ScalingReplicaSet | Scaled up replica set test-operator-controller-manager-565dfd7bb9 to 1 |
| openstack-operators | replicaset-controller | test-operator-controller-manager-565dfd7bb9 | SuccessfulCreate | Created pod: test-operator-controller-manager-565dfd7bb9-c6fnn |
| openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | infra-operator-serving-cert-1 | CertificateIssued | Certificate fetched from issuer successfully |
| openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | infra-operator-serving-cert-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| openstack-operators | cert-manager-certificaterequests-approver | infra-operator-serving-cert-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| openstack-operators | cert-manager-certificaterequests-issuer-ca | infra-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| openstack-operators | cert-manager-certificaterequests-issuer-venafi | infra-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| openstack-operators | cert-manager-certificaterequests-issuer-acme | infra-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| openstack-operators | cert-manager-certificaterequests-issuer-vault | infra-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| openstack-operators | deployment-controller | telemetry-operator-controller-manager | ScalingReplicaSet | Scaled up replica set telemetry-operator-controller-manager-7585684bd7 to 1 |
| openstack-operators | replicaset-controller | telemetry-operator-controller-manager-7585684bd7 | SuccessfulCreate | Created pod: telemetry-operator-controller-manager-7585684bd7-5wlc8 |
| openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | infra-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| openstack-operators | deployment-controller | infra-operator-controller-manager | ScalingReplicaSet | Scaled up replica set infra-operator-controller-manager-d68fd5cdf to 1 |
| openstack-operators | replicaset-controller | infra-operator-controller-manager-d68fd5cdf | SuccessfulCreate | Created pod: infra-operator-controller-manager-d68fd5cdf-sbpvg |
| openstack-operators | deployment-controller | horizon-operator-controller-manager | ScalingReplicaSet | Scaled up replica set horizon-operator-controller-manager-54969ff695 to 1 |
| openstack-operators | replicaset-controller | horizon-operator-controller-manager-54969ff695 | SuccessfulCreate | Created pod: horizon-operator-controller-manager-54969ff695-hgxjt |
| openstack-operators | replicaset-controller | mariadb-operator-controller-manager-7f4856d67b | SuccessfulCreate | Created pod: mariadb-operator-controller-manager-7f4856d67b-sgjwb |
| openstack-operators | deployment-controller | heat-operator-controller-manager | ScalingReplicaSet | Scaled up replica set heat-operator-controller-manager-68fc865f87 to 1 |
| openstack-operators | replicaset-controller | ironic-operator-controller-manager-6b498574d4 | SuccessfulCreate | Created pod: ironic-operator-controller-manager-6b498574d4-tcqkg |
| openstack-operators | deployment-controller | swift-operator-controller-manager | ScalingReplicaSet | Scaled up replica set swift-operator-controller-manager-6d4f9d7767 to 1 |
| openstack-operators | replicaset-controller | swift-operator-controller-manager-6d4f9d7767 | SuccessfulCreate | Created pod: swift-operator-controller-manager-6d4f9d7767-dj7p8 |
| openstack-operators | multus | heat-operator-controller-manager-68fc865f87-c8wmp | AddedInterface | Add eth0 [10.130.0.56/23] from ovn-kubernetes |
| openstack-operators | deployment-controller | glance-operator-controller-manager | ScalingReplicaSet | Scaled up replica set glance-operator-controller-manager-59bd97c6b9 to 1 |
| openstack-operators | replicaset-controller | glance-operator-controller-manager-59bd97c6b9 | SuccessfulCreate | Created pod: glance-operator-controller-manager-59bd97c6b9-s2zqv |
| openstack-operators | deployment-controller | designate-operator-controller-manager | ScalingReplicaSet | Scaled up replica set designate-operator-controller-manager-67d84b9cc to 1 |
| openstack-operators | cert-manager-certificates-issuing | infra-operator-serving-cert | Issuing | The certificate has been successfully issued |
| openstack-operators | deployment-controller | ironic-operator-controller-manager | ScalingReplicaSet | Scaled up replica set ironic-operator-controller-manager-6b498574d4 to 1 |
| openstack-operators | replicaset-controller | keystone-operator-controller-manager-f4487c759 | SuccessfulCreate | Created pod: keystone-operator-controller-manager-f4487c759-hdfw8 |
| openstack-operators | multus | designate-operator-controller-manager-67d84b9cc-698kz | AddedInterface | Add eth0 [10.128.0.110/23] from ovn-kubernetes |
| openstack-operators | kubelet | cinder-operator-controller-manager-5484486656-vvnpp | Pulling | Pulling image "quay.io/openstack-k8s-operators/cinder-operator@sha256:c487a793648e64af2d64df5f6efbda2d4fd586acd7aee6838d3ec2b3edd9efb9" |
| openstack-operators | multus | cinder-operator-controller-manager-5484486656-vvnpp | AddedInterface | Add eth0 [10.128.0.109/23] from ovn-kubernetes |
| openstack-operators | deployment-controller | keystone-operator-controller-manager | ScalingReplicaSet | Scaled up replica set keystone-operator-controller-manager-f4487c759 to 1 |
| openstack-operators | replicaset-controller | manila-operator-controller-manager-6d78f57554 | SuccessfulCreate | Created pod: manila-operator-controller-manager-6d78f57554-t6sj6 |
| openstack-operators | kubelet | barbican-operator-controller-manager-658c7b459c-pwzgf | Pulling | Pulling image "quay.io/openstack-k8s-operators/barbican-operator@sha256:783f711b4cb179819cfcb81167c3591c70671440f4551bbe48b7a8730567f577" |
| openstack-operators | multus | barbican-operator-controller-manager-658c7b459c-pwzgf | AddedInterface | Add eth0 [10.130.0.55/23] from ovn-kubernetes |
| openstack-operators | replicaset-controller | neutron-operator-controller-manager-7c95684bcc | SuccessfulCreate | Created pod: neutron-operator-controller-manager-7c95684bcc-qn2dm |
| openstack-operators | deployment-controller | neutron-operator-controller-manager | ScalingReplicaSet | Scaled up replica set neutron-operator-controller-manager-7c95684bcc to 1 |
| openstack-operators | deployment-controller | placement-operator-controller-manager | ScalingReplicaSet | Scaled up replica set placement-operator-controller-manager-569c9576c5 to 1 |
| openstack-operators | replicaset-controller | placement-operator-controller-manager-569c9576c5 | SuccessfulCreate | Created pod: placement-operator-controller-manager-569c9576c5-4zgfk |
| openstack-operators | replicaset-controller | nova-operator-controller-manager-64487ccd4d | SuccessfulCreate | Created pod: nova-operator-controller-manager-64487ccd4d-8gqsb |
| openstack-operators | deployment-controller | nova-operator-controller-manager | ScalingReplicaSet | Scaled up replica set nova-operator-controller-manager-64487ccd4d to 1 |
| openstack-operators | replicaset-controller | octavia-operator-controller-manager-f456fb6cd | SuccessfulCreate | Created pod: octavia-operator-controller-manager-f456fb6cd-nb6ph |
| openstack-operators | deployment-controller | octavia-operator-controller-manager | ScalingReplicaSet | Scaled up replica set octavia-operator-controller-manager-f456fb6cd to 1 |
| openstack-operators | replicaset-controller | openstack-baremetal-operator-controller-manager-69958697d7 | SuccessfulCreate | Created pod: openstack-baremetal-operator-controller-manager-69958697d76f9td |
| openstack-operators | deployment-controller | openstack-baremetal-operator-controller-manager | ScalingReplicaSet | Scaled up replica set openstack-baremetal-operator-controller-manager-69958697d7 to 1 |
| openstack-operators | cert-manager-certificates-trigger | openstack-baremetal-operator-serving-cert | Issuing | Issuing certificate as Secret does not exist |
| openstack-operators | deployment-controller | manila-operator-controller-manager | ScalingReplicaSet | Scaled up replica set manila-operator-controller-manager-6d78f57554 to 1 |
| openstack-operators | deployment-controller | mariadb-operator-controller-manager | ScalingReplicaSet | Scaled up replica set mariadb-operator-controller-manager-7f4856d67b to 1 |
| openstack-operators | deployment-controller | ovn-operator-controller-manager | ScalingReplicaSet | Scaled up replica set ovn-operator-controller-manager-f9dd6d5b6 to 1 |
| openstack-operators | replicaset-controller | ovn-operator-controller-manager-f9dd6d5b6 | SuccessfulCreate | Created pod: ovn-operator-controller-manager-f9dd6d5b6-46wwk |
| openstack-operators | replicaset-controller | watcher-operator-controller-manager-7c4579d8cf | SuccessfulCreate | Created pod: watcher-operator-controller-manager-7c4579d8cf-pqbbd |
| openstack-operators | deployment-controller | watcher-operator-controller-manager | ScalingReplicaSet | Scaled up replica set watcher-operator-controller-manager-7c4579d8cf to 1 |
| openstack-operators | multus | horizon-operator-controller-manager-54969ff695-hgxjt | AddedInterface | Add eth0 [10.130.0.57/23] from ovn-kubernetes |
| openstack-operators | kubelet | octavia-operator-controller-manager-f456fb6cd-nb6ph | Pulling | Pulling image "quay.io/openstack-k8s-operators/octavia-operator@sha256:09deecf840d38ff6af3c924729cf0a9444bc985848bfbe7c918019b88a6bc4d7" |
| openstack-operators | kubelet | manila-operator-controller-manager-6d78f57554-t6sj6 | Pulling | Pulling image "quay.io/openstack-k8s-operators/manila-operator@sha256:582f7b1e411961b69f2e3c6b346aa25759b89f7720ed3fade1d363bf5d2dffc8" |
| openstack-operators | multus | manila-operator-controller-manager-6d78f57554-t6sj6 | AddedInterface | Add eth0 [10.129.0.101/23] from ovn-kubernetes |
| openstack-operators | multus | neutron-operator-controller-manager-7c95684bcc-qn2dm | AddedInterface | Add eth0 [10.129.0.102/23] from ovn-kubernetes |
| openstack-operators | kubelet | ovn-operator-controller-manager-f9dd6d5b6-46wwk | Pulling | Pulling image "quay.io/openstack-k8s-operators/ovn-operator@sha256:315e558023b41ac1aa215082096995a03810c5b42910a33b00427ffcac9c6a14" |
| openstack-operators | multus | ovn-operator-controller-manager-f9dd6d5b6-46wwk | AddedInterface | Add eth0 [10.128.0.114/23] from ovn-kubernetes |
| openstack-operators | cert-manager-certificates-issuing | openstack-operator-serving-cert | Issuing | The certificate has been successfully issued |
| openstack-operators | cert-manager-certificates-request-manager | openstack-operator-serving-cert | Requested | Created new CertificateRequest resource "openstack-operator-serving-cert-1" |
| openstack-operators | cert-manager-certificates-key-manager | openstack-operator-serving-cert | Generated | Stored new private key in temporary Secret resource "openstack-operator-serving-cert-ngptl" |
| openstack-operators | cert-manager-certificates-trigger | openstack-operator-serving-cert | Issuing | Issuing certificate as Secret does not exist |
| openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-operator-serving-cert-1 | CertificateIssued | Certificate fetched from issuer successfully |
| openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-operator-serving-cert-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| openstack-operators | cert-manager-certificaterequests-approver | openstack-operator-serving-cert-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| openstack-operators | cert-manager-certificaterequests-issuer-ca | openstack-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| openstack-operators | cert-manager-certificaterequests-issuer-venafi | openstack-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| openstack-operators | cert-manager-certificaterequests-issuer-acme | openstack-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| openstack-operators | cert-manager-certificaterequests-issuer-vault | openstack-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| openstack-operators | deployment-controller | openstack-operator-controller-manager | ScalingReplicaSet | Scaled up replica set openstack-operator-controller-manager-6566ff98d5 to 1 |
| openstack-operators | replicaset-controller | openstack-operator-controller-manager-6566ff98d5 | SuccessfulCreate | Created pod: openstack-operator-controller-manager-6566ff98d5-wbc89 |
| openstack-operators | kubelet | keystone-operator-controller-manager-f4487c759-hdfw8 | Pulling | Pulling image "quay.io/openstack-k8s-operators/keystone-operator@sha256:79b43a69884631c635d2164b95a2d4ec68f5cb33f96da14764f1c710880f3997" |
| openstack-operators | multus | keystone-operator-controller-manager-f4487c759-hdfw8 | AddedInterface | Add eth0 [10.128.0.113/23] from ovn-kubernetes |
| openstack-operators | kubelet | neutron-operator-controller-manager-7c95684bcc-qn2dm | Pulling | Pulling image "quay.io/openstack-k8s-operators/neutron-operator@sha256:33652e75a03a058769019fe8d8c51585a6eeefef5e1ecb96f9965434117954f2" |
| openstack-operators | kubelet | ironic-operator-controller-manager-6b498574d4-tcqkg | Pulling | Pulling image "quay.io/openstack-k8s-operators/ironic-operator@sha256:ee05f2b06405240a8fcdbd430a9e8983b4667f372548334307b68c154e389960" |
| openstack-operators | multus | ironic-operator-controller-manager-6b498574d4-tcqkg | AddedInterface | Add eth0 [10.129.0.100/23] from ovn-kubernetes |
| openstack-operators | kubelet | watcher-operator-controller-manager-7c4579d8cf-pqbbd | Pulling | Pulling image "quay.io/openstack-k8s-operators/watcher-operator@sha256:98a5233f0596591acdf2c6a5838b08be108787cdb6ad1995b2b7886bac0fe6ca" |
| openstack-operators | kubelet | test-operator-controller-manager-565dfd7bb9-c6fnn | Pulling | Pulling image "quay.io/openstack-k8s-operators/test-operator@sha256:7e584b1c430441c8b6591dadeff32e065de8a185ad37ef90d2e08d37e59aab4a" |
| openstack-operators | multus | test-operator-controller-manager-565dfd7bb9-c6fnn | AddedInterface | Add eth0 [10.129.0.104/23] from ovn-kubernetes |
| openstack-operators | kubelet | infra-operator-controller-manager-d68fd5cdf-sbpvg | Pulling | Pulling image "quay.io/openstack-k8s-operators/infra-operator@sha256:5cfb2ae1092445950b39dd59caa9a8c9367f42fb8353a8c3848d3bc729f24492" |
| openstack-operators | multus | infra-operator-controller-manager-d68fd5cdf-sbpvg | AddedInterface | Add eth0 [10.128.0.112/23] from ovn-kubernetes |
| openstack-operators | kubelet | horizon-operator-controller-manager-54969ff695-hgxjt | Pulling | Pulling image "quay.io/openstack-k8s-operators/horizon-operator@sha256:063a7e65b4ba98f0506f269ff7525b446eae06a5ed4a61c18ffa33a886500867" |
| openstack-operators | kubelet | telemetry-operator-controller-manager-7585684bd7-5wlc8 | Pulling | Pulling image "quay.io/openstack-k8s-operators/telemetry-operator@sha256:abe978f8da75223de5043cca50278ad4e28c8dd309883f502fe1e7a9998733b0" |
| openstack-operators | multus | telemetry-operator-controller-manager-7585684bd7-5wlc8 | AddedInterface | Add eth0 [10.130.0.62/23] from ovn-kubernetes |
| openstack-operators | kubelet | heat-operator-controller-manager-68fc865f87-c8wmp | Pulling | Pulling image "quay.io/openstack-k8s-operators/heat-operator@sha256:ec11cb8711bd1af22db3c84aa854349ee46191add3db45aecfabb1d8410c04d0" |
| openstack-operators | kubelet | glance-operator-controller-manager-59bd97c6b9-s2zqv | Pulling | Pulling image "quay.io/openstack-k8s-operators/glance-operator@sha256:3cc6bba71197ddf88dd4ba1301542bacbc1fe12e6faab2b69e6960944b3d74a0" |
(x2) | openstack-operators | kubelet | openstack-operator-controller-manager-6566ff98d5-wbc89 | FailedMount | MountVolume.SetUp failed for volume "cert" : secret "webhook-server-cert" not found |
| openstack-operators | kubelet | placement-operator-controller-manager-569c9576c5-4zgfk | Pulling | Pulling image "quay.io/openstack-k8s-operators/placement-operator@sha256:d33c1f507e1f5b9a4bf226ad98917e92101ac66b36e19d35cbe04ae7014f6bff" |
| openstack-operators | cert-manager-certificates-issuing | openstack-baremetal-operator-serving-cert | Issuing | The certificate has been successfully issued |
| openstack-operators | cert-manager-certificates-request-manager | openstack-baremetal-operator-serving-cert | Requested | Created new CertificateRequest resource "openstack-baremetal-operator-serving-cert-1" |
| openstack-operators | cert-manager-certificates-key-manager | openstack-baremetal-operator-serving-cert | Generated | Stored new private key in temporary Secret resource "openstack-baremetal-operator-serving-cert-qf5hc" |
| openstack-operators | multus | glance-operator-controller-manager-59bd97c6b9-s2zqv | AddedInterface | Add eth0 [10.128.0.111/23] from ovn-kubernetes |
| openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-baremetal-operator-serving-cert-1 | CertificateIssued | Certificate fetched from issuer successfully |
| openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-baremetal-operator-serving-cert-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| openstack-operators | kubelet | mariadb-operator-controller-manager-7f4856d67b-sgjwb | Pulling | Pulling image "quay.io/openstack-k8s-operators/mariadb-operator@sha256:47278ed28e02df00892f941763aa0d69547327318e8a983e07f4577acd288167" |
| openstack-operators | cert-manager-certificaterequests-approver | openstack-baremetal-operator-serving-cert-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| openstack-operators | cert-manager-certificaterequests-issuer-ca | openstack-baremetal-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| openstack-operators | cert-manager-certificaterequests-issuer-vault | openstack-baremetal-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-baremetal-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| openstack-operators | cert-manager-certificaterequests-issuer-venafi | openstack-baremetal-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| openstack-operators | cert-manager-certificaterequests-issuer-acme | openstack-baremetal-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| openstack-operators | kubelet | designate-operator-controller-manager-67d84b9cc-698kz | Pulling | Pulling image "quay.io/openstack-k8s-operators/designate-operator@sha256:73736f216f886549901fbcfc823b072f73691c9a79ec79e59d100e992b9c1e34" |
| openstack-operators | kubelet | swift-operator-controller-manager-6d4f9d7767-dj7p8 | Pulling | Pulling image "quay.io/openstack-k8s-operators/swift-operator@sha256:4b4a17fe08ce00e375afaaec6a28835f5c1784f03d11c4558376ac04130f3a9e" |
| openstack-operators | multus | swift-operator-controller-manager-6d4f9d7767-dj7p8 | AddedInterface | Add eth0 [10.130.0.61/23] from ovn-kubernetes |
| openstack-operators | deployment-controller | rabbitmq-cluster-operator-manager | ScalingReplicaSet | Scaled up replica set rabbitmq-cluster-operator-manager-84795b7cfd to 1 |
| openstack-operators | multus | placement-operator-controller-manager-569c9576c5-4zgfk | AddedInterface | Add eth0 [10.128.0.115/23] from ovn-kubernetes |
| openstack-operators | replicaset-controller | rabbitmq-cluster-operator-manager-84795b7cfd | SuccessfulCreate | Created pod: rabbitmq-cluster-operator-manager-84795b7cfd-9mg2n |
| openshift-marketplace | kubelet | community-operators-5zr4r | Killing | Stopping container registry-server |
| openstack-operators | multus | watcher-operator-controller-manager-7c4579d8cf-pqbbd | AddedInterface | Add eth0 [10.128.0.116/23] from ovn-kubernetes |
| openstack-operators | kubelet | rabbitmq-cluster-operator-manager-84795b7cfd-9mg2n | Pulling | Pulling image "quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" |
| openstack-operators | multus | mariadb-operator-controller-manager-7f4856d67b-sgjwb | AddedInterface | Add eth0 [10.130.0.58/23] from ovn-kubernetes |
| openstack-operators | multus | rabbitmq-cluster-operator-manager-84795b7cfd-9mg2n | AddedInterface | Add eth0 [10.128.0.117/23] from ovn-kubernetes |
| openstack-operators | multus | nova-operator-controller-manager-64487ccd4d-8gqsb | AddedInterface | Add eth0 [10.130.0.59/23] from ovn-kubernetes |
| openstack-operators | kubelet | nova-operator-controller-manager-64487ccd4d-8gqsb | Pulling | Pulling image "quay.io/openstack-k8s-operators/nova-operator@sha256:b2e9acf568a48c28cf2aed6012e432eeeb7d5f0eb11878fc91b62bc34cba10cd" |
(x2) | openstack-operators | kubelet | openstack-baremetal-operator-controller-manager-69958697d76f9td | FailedMount | MountVolume.SetUp failed for volume "cert" : secret "openstack-baremetal-operator-webhook-server-cert" not found |
| openstack-operators | multus | octavia-operator-controller-manager-f456fb6cd-nb6ph | AddedInterface | Add eth0 [10.130.0.60/23] from ovn-kubernetes |
| openstack-operators | multus | openstack-baremetal-operator-controller-manager-69958697d76f9td | AddedInterface | Add eth0 [10.129.0.103/23] from ovn-kubernetes |
| openstack-operators | kubelet | openstack-baremetal-operator-controller-manager-69958697d76f9td | Pulling | Pulling image "quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:a17fc270857869fd1efe5020b2a1cb8c2abbd838f08de88f3a6a59e8754ec351" |
| openstack-operators | kubelet | horizon-operator-controller-manager-54969ff695-hgxjt | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/horizon-operator@sha256:063a7e65b4ba98f0506f269ff7525b446eae06a5ed4a61c18ffa33a886500867" in 4.719s (4.719s including waiting). Image size: 176423625 bytes. |
| openstack-operators | kubelet | barbican-operator-controller-manager-658c7b459c-pwzgf | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/barbican-operator@sha256:783f711b4cb179819cfcb81167c3591c70671440f4551bbe48b7a8730567f577" in 5.028s (5.028s including waiting). Image size: 177294036 bytes. |
| openstack-operators | kubelet | manila-operator-controller-manager-6d78f57554-t6sj6 | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/manila-operator@sha256:582f7b1e411961b69f2e3c6b346aa25759b89f7720ed3fade1d363bf5d2dffc8" in 4.513s (4.513s including waiting). Image size: 177433274 bytes. |
| openstack-operators | kubelet | ironic-operator-controller-manager-6b498574d4-tcqkg | Pulling | Pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy@sha256:d28df2924a366ed857d6c2c14baac9741238032d41f3d02c12cd757189b68b8a" |
| openstack-operators | kubelet | ironic-operator-controller-manager-6b498574d4-tcqkg | Started | Started container manager |
| openstack-operators | kubelet | ironic-operator-controller-manager-6b498574d4-tcqkg | Created | Created container: manager |
| openstack-operators | kubelet | ironic-operator-controller-manager-6b498574d4-tcqkg | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/ironic-operator@sha256:ee05f2b06405240a8fcdbd430a9e8983b4667f372548334307b68c154e389960" in 4.58s (4.58s including waiting). Image size: 177822396 bytes. |
| openstack-operators | kubelet | octavia-operator-controller-manager-f456fb6cd-nb6ph | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/octavia-operator@sha256:09deecf840d38ff6af3c924729cf0a9444bc985848bfbe7c918019b88a6bc4d7" in 4.513s (4.513s including waiting). Image size: 179355335 bytes. |
| openstack-operators | kubelet | neutron-operator-controller-manager-7c95684bcc-qn2dm | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/neutron-operator@sha256:33652e75a03a058769019fe8d8c51585a6eeefef5e1ecb96f9965434117954f2" in 4.428s (4.428s including waiting). Image size: 177237705 bytes. |
| openstack-operators | kubelet | test-operator-controller-manager-565dfd7bb9-c6fnn | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/test-operator@sha256:7e584b1c430441c8b6591dadeff32e065de8a185ad37ef90d2e08d37e59aab4a" in 4.214s (4.214s including waiting). Image size: 175635124 bytes. |
| openstack-operators | kubelet | manila-operator-controller-manager-6d78f57554-t6sj6 | Started | Started container manager |
| openstack-operators | kubelet | manila-operator-controller-manager-6d78f57554-t6sj6 | Pulling | Pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy@sha256:d28df2924a366ed857d6c2c14baac9741238032d41f3d02c12cd757189b68b8a" |
| openstack-operators | kubelet | mariadb-operator-controller-manager-7f4856d67b-sgjwb | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/mariadb-operator@sha256:47278ed28e02df00892f941763aa0d69547327318e8a983e07f4577acd288167" in 4.567s (4.567s including waiting). Image size: 176511950 bytes. |
| openstack-operators | kubelet | neutron-operator-controller-manager-7c95684bcc-qn2dm | Created | Created container: manager |
| openstack-operators | kubelet | manila-operator-controller-manager-6d78f57554-t6sj6 | Created | Created container: manager |
| openstack-operators | kubelet | telemetry-operator-controller-manager-7585684bd7-5wlc8 | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/telemetry-operator@sha256:abe978f8da75223de5043cca50278ad4e28c8dd309883f502fe1e7a9998733b0" in 4.192s (4.192s including waiting). Image size: 180600032 bytes. |
| openstack-operators | kubelet | neutron-operator-controller-manager-7c95684bcc-qn2dm | Started | Started container manager |
| openstack-operators | kubelet | nova-operator-controller-manager-64487ccd4d-8gqsb | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/nova-operator@sha256:b2e9acf568a48c28cf2aed6012e432eeeb7d5f0eb11878fc91b62bc34cba10cd" in 4.423s (4.423s including waiting). Image size: 179076256 bytes. |
| openstack-operators | kubelet | heat-operator-controller-manager-68fc865f87-c8wmp | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/heat-operator@sha256:ec11cb8711bd1af22db3c84aa854349ee46191add3db45aecfabb1d8410c04d0" in 4.839s (4.839s including waiting). Image size: 177764000 bytes. |
| openstack-operators | kubelet | openstack-operator-controller-manager-6566ff98d5-wbc89 | Pulling | Pulling image "quay.io/openstack-k8s-operators/openstack-operator@sha256:766f6cc606336f5197dc6f7b61bf140b28159516bf388f2ea65ed95013829a1c" |
| openstack-operators | kubelet | neutron-operator-controller-manager-7c95684bcc-qn2dm | Pulling | Pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy@sha256:d28df2924a366ed857d6c2c14baac9741238032d41f3d02c12cd757189b68b8a" |
| openstack-operators | multus | openstack-operator-controller-manager-6566ff98d5-wbc89 | AddedInterface | Add eth0 [10.129.0.105/23] from ovn-kubernetes |
| openstack-operators | kubelet | swift-operator-controller-manager-6d4f9d7767-dj7p8 | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/swift-operator@sha256:4b4a17fe08ce00e375afaaec6a28835f5c1784f03d11c4558376ac04130f3a9e" in 4.345s (4.345s including waiting). Image size: 178374831 bytes. |
| openstack-operators | kubelet | barbican-operator-controller-manager-658c7b459c-pwzgf | Pulled | Container image "quay.io/openstack-k8s-operators/kube-rbac-proxy@sha256:d28df2924a366ed857d6c2c14baac9741238032d41f3d02c12cd757189b68b8a" already present on machine |
| openstack-operators | kubelet | test-operator-controller-manager-565dfd7bb9-c6fnn | Started | Started container manager |
openstack-operators |
test-operator-controller-manager-565dfd7bb9-c6fnn_718cda3e-13a6-4b31-aedc-da690f912b2a |
6cce095b.openstack.org |
LeaderElection |
test-operator-controller-manager-565dfd7bb9-c6fnn_718cda3e-13a6-4b31-aedc-da690f912b2a became leader | |
openstack-operators |
kubelet |
octavia-operator-controller-manager-f456fb6cd-nb6ph |
Started |
Started container kube-rbac-proxy | |
openstack-operators |
mariadb-operator-controller-manager-7f4856d67b-sgjwb_7b0744e0-ca4d-4505-9261-b67e963a263a |
7c2a6c6b.openstack.org |
LeaderElection |
mariadb-operator-controller-manager-7f4856d67b-sgjwb_7b0744e0-ca4d-4505-9261-b67e963a263a became leader | |
openstack-operators |
swift-operator-controller-manager-6d4f9d7767-dj7p8_0973adc1-c757-491b-8ff9-bb68f8ab6d6e |
83821f12.openstack.org |
LeaderElection |
swift-operator-controller-manager-6d4f9d7767-dj7p8_0973adc1-c757-491b-8ff9-bb68f8ab6d6e became leader | |
openstack-operators |
manila-operator-controller-manager-6d78f57554-t6sj6_8d4b36d8-508a-45ef-8187-c83038260ebf |
858862a7.openstack.org |
LeaderElection |
manila-operator-controller-manager-6d78f57554-t6sj6_8d4b36d8-508a-45ef-8187-c83038260ebf became leader | |
openstack-operators |
barbican-operator-controller-manager-658c7b459c-pwzgf_ed5e94ea-43f0-41b0-b888-8854608521e5 |
8cc931b9.openstack.org |
LeaderElection |
barbican-operator-controller-manager-658c7b459c-pwzgf_ed5e94ea-43f0-41b0-b888-8854608521e5 became leader | |
openstack-operators |
kubelet |
mariadb-operator-controller-manager-7f4856d67b-sgjwb |
Started |
Started container kube-rbac-proxy | |
openstack-operators |
neutron-operator-controller-manager-7c95684bcc-qn2dm_995552e6-ba1d-43f8-a4a4-3e66512c16fb |
972c7522.openstack.org |
LeaderElection |
neutron-operator-controller-manager-7c95684bcc-qn2dm_995552e6-ba1d-43f8-a4a4-3e66512c16fb became leader | |
| | openstack-operators | octavia-operator-controller-manager-f456fb6cd-nb6ph_cf42a07b-9cbb-4ded-95ca-931566383ff3 | 98809e87.openstack.org | LeaderElection | octavia-operator-controller-manager-f456fb6cd-nb6ph_cf42a07b-9cbb-4ded-95ca-931566383ff3 became leader |
| | openstack-operators | kubelet | octavia-operator-controller-manager-f456fb6cd-nb6ph | Created | Created container: kube-rbac-proxy |
| | openstack-operators | kubelet | octavia-operator-controller-manager-f456fb6cd-nb6ph | Pulled | Container image "quay.io/openstack-k8s-operators/kube-rbac-proxy@sha256:d28df2924a366ed857d6c2c14baac9741238032d41f3d02c12cd757189b68b8a" already present on machine |
| | openstack-operators | kubelet | openstack-baremetal-operator-controller-manager-69958697d76f9td | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:a17fc270857869fd1efe5020b2a1cb8c2abbd838f08de88f3a6a59e8754ec351" in 2.743s (2.743s including waiting). Image size: 177514341 bytes. |
| | openstack-operators | kubelet | octavia-operator-controller-manager-f456fb6cd-nb6ph | Started | Started container manager |
| | openstack-operators | horizon-operator-controller-manager-54969ff695-hgxjt_0caf234a-21ce-4b18-9f20-07eeeebba7a1 | 5ad2eba0.openstack.org | LeaderElection | horizon-operator-controller-manager-54969ff695-hgxjt_0caf234a-21ce-4b18-9f20-07eeeebba7a1 became leader |
| | openstack-operators | kubelet | barbican-operator-controller-manager-658c7b459c-pwzgf | Created | Created container: manager |
| | openstack-operators | kubelet | barbican-operator-controller-manager-658c7b459c-pwzgf | Started | Started container manager |
| | openstack-operators | kubelet | octavia-operator-controller-manager-f456fb6cd-nb6ph | Created | Created container: manager |
| | openstack-operators | kubelet | barbican-operator-controller-manager-658c7b459c-pwzgf | Created | Created container: kube-rbac-proxy |
| | openstack-operators | kubelet | barbican-operator-controller-manager-658c7b459c-pwzgf | Started | Started container kube-rbac-proxy |
| | openstack-operators | kubelet | mariadb-operator-controller-manager-7f4856d67b-sgjwb | Created | Created container: kube-rbac-proxy |
| | openstack-operators | kubelet | mariadb-operator-controller-manager-7f4856d67b-sgjwb | Pulled | Container image "quay.io/openstack-k8s-operators/kube-rbac-proxy@sha256:d28df2924a366ed857d6c2c14baac9741238032d41f3d02c12cd757189b68b8a" already present on machine |
| | openstack-operators | kubelet | openstack-baremetal-operator-controller-manager-69958697d76f9td | Created | Created container: manager |
| | openstack-operators | kubelet | openstack-baremetal-operator-controller-manager-69958697d76f9td | Started | Started container manager |
| | openstack-operators | kubelet | openstack-baremetal-operator-controller-manager-69958697d76f9td | Pulling | Pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy@sha256:d28df2924a366ed857d6c2c14baac9741238032d41f3d02c12cd757189b68b8a" |
| | openstack-operators | heat-operator-controller-manager-68fc865f87-c8wmp_6d6be345-b46b-47df-8112-6e117836df0e | c3c8b535.openstack.org | LeaderElection | heat-operator-controller-manager-68fc865f87-c8wmp_6d6be345-b46b-47df-8112-6e117836df0e became leader |
| | openstack-operators | kubelet | test-operator-controller-manager-565dfd7bb9-c6fnn | Pulling | Pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy@sha256:d28df2924a366ed857d6c2c14baac9741238032d41f3d02c12cd757189b68b8a" |
| | openstack-operators | kubelet | test-operator-controller-manager-565dfd7bb9-c6fnn | Created | Created container: manager |
| | openstack-operators | kubelet | mariadb-operator-controller-manager-7f4856d67b-sgjwb | Started | Started container manager |
| | openstack-operators | kubelet | mariadb-operator-controller-manager-7f4856d67b-sgjwb | Created | Created container: manager |
| | openstack-operators | kubelet | telemetry-operator-controller-manager-7585684bd7-5wlc8 | Started | Started container kube-rbac-proxy |
| | openstack-operators | kubelet | telemetry-operator-controller-manager-7585684bd7-5wlc8 | Created | Created container: kube-rbac-proxy |
| | openstack-operators | kubelet | telemetry-operator-controller-manager-7585684bd7-5wlc8 | Pulled | Container image "quay.io/openstack-k8s-operators/kube-rbac-proxy@sha256:d28df2924a366ed857d6c2c14baac9741238032d41f3d02c12cd757189b68b8a" already present on machine |
| | openstack-operators | kubelet | nova-operator-controller-manager-64487ccd4d-8gqsb | Started | Started container kube-rbac-proxy |
| | openstack-operators | kubelet | nova-operator-controller-manager-64487ccd4d-8gqsb | Created | Created container: kube-rbac-proxy |
| | openstack-operators | kubelet | telemetry-operator-controller-manager-7585684bd7-5wlc8 | Started | Started container manager |
| | openstack-operators | kubelet | telemetry-operator-controller-manager-7585684bd7-5wlc8 | Created | Created container: manager |
| | openstack-operators | kubelet | horizon-operator-controller-manager-54969ff695-hgxjt | Started | Started container kube-rbac-proxy |
| | openstack-operators | kubelet | horizon-operator-controller-manager-54969ff695-hgxjt | Created | Created container: kube-rbac-proxy |
| | openstack-operators | kubelet | horizon-operator-controller-manager-54969ff695-hgxjt | Pulled | Container image "quay.io/openstack-k8s-operators/kube-rbac-proxy@sha256:d28df2924a366ed857d6c2c14baac9741238032d41f3d02c12cd757189b68b8a" already present on machine |
| | openstack-operators | kubelet | horizon-operator-controller-manager-54969ff695-hgxjt | Started | Started container manager |
| | openstack-operators | kubelet | horizon-operator-controller-manager-54969ff695-hgxjt | Created | Created container: manager |
| | openstack-operators | kubelet | nova-operator-controller-manager-64487ccd4d-8gqsb | Pulled | Container image "quay.io/openstack-k8s-operators/kube-rbac-proxy@sha256:d28df2924a366ed857d6c2c14baac9741238032d41f3d02c12cd757189b68b8a" already present on machine |
| | openstack-operators | kubelet | nova-operator-controller-manager-64487ccd4d-8gqsb | Started | Started container manager |
| | openstack-operators | kubelet | nova-operator-controller-manager-64487ccd4d-8gqsb | Created | Created container: manager |
| | openstack-operators | kubelet | heat-operator-controller-manager-68fc865f87-c8wmp | Started | Started container kube-rbac-proxy |
| | openstack-operators | kubelet | heat-operator-controller-manager-68fc865f87-c8wmp | Created | Created container: kube-rbac-proxy |
| | openstack-operators | kubelet | heat-operator-controller-manager-68fc865f87-c8wmp | Pulled | Container image "quay.io/openstack-k8s-operators/kube-rbac-proxy@sha256:d28df2924a366ed857d6c2c14baac9741238032d41f3d02c12cd757189b68b8a" already present on machine |
| | openstack-operators | kubelet | swift-operator-controller-manager-6d4f9d7767-dj7p8 | Created | Created container: manager |
| | openstack-operators | kubelet | swift-operator-controller-manager-6d4f9d7767-dj7p8 | Started | Started container manager |
| | openstack-operators | nova-operator-controller-manager-64487ccd4d-8gqsb_711e0785-4fa2-46f1-bbcf-a7380759d94a | f33036c1.openstack.org | LeaderElection | nova-operator-controller-manager-64487ccd4d-8gqsb_711e0785-4fa2-46f1-bbcf-a7380759d94a became leader |
| | openstack-operators | ironic-operator-controller-manager-6b498574d4-tcqkg_c81e6105-e107-43cd-949e-4219b66aeb84 | f92b5c2d.openstack.org | LeaderElection | ironic-operator-controller-manager-6b498574d4-tcqkg_c81e6105-e107-43cd-949e-4219b66aeb84 became leader |
| | openstack-operators | kubelet | heat-operator-controller-manager-68fc865f87-c8wmp | Started | Started container manager |
| | openstack-operators | telemetry-operator-controller-manager-7585684bd7-5wlc8_bc9df15d-5304-40ef-ab24-0e66f30aab10 | fa1814a2.openstack.org | LeaderElection | telemetry-operator-controller-manager-7585684bd7-5wlc8_bc9df15d-5304-40ef-ab24-0e66f30aab10 became leader |
| | openstack-operators | kubelet | heat-operator-controller-manager-68fc865f87-c8wmp | Created | Created container: manager |
| | openstack-operators | kubelet | swift-operator-controller-manager-6d4f9d7767-dj7p8 | Started | Started container kube-rbac-proxy |
| | openstack-operators | kubelet | swift-operator-controller-manager-6d4f9d7767-dj7p8 | Created | Created container: kube-rbac-proxy |
| | openstack-operators | kubelet | swift-operator-controller-manager-6d4f9d7767-dj7p8 | Pulled | Container image "quay.io/openstack-k8s-operators/kube-rbac-proxy@sha256:d28df2924a366ed857d6c2c14baac9741238032d41f3d02c12cd757189b68b8a" already present on machine |
| | openstack-operators | openstack-baremetal-operator-controller-manager-69958697d76f9td_28a79c03-cc02-4b58-b641-0ae20a412609 | dedc2245.openstack.org | LeaderElection | openstack-baremetal-operator-controller-manager-69958697d76f9td_28a79c03-cc02-4b58-b641-0ae20a412609 became leader |
| | openstack-operators | kubelet | designate-operator-controller-manager-67d84b9cc-698kz | Created | Created container: manager |
| | openstack-operators | kubelet | ovn-operator-controller-manager-f9dd6d5b6-46wwk | Started | Started container manager |
| | openstack-operators | placement-operator-controller-manager-569c9576c5-4zgfk_0308d8b7-ffa6-4bce-a434-23ecba5295fc | 73d6b7ce.openstack.org | LeaderElection | placement-operator-controller-manager-569c9576c5-4zgfk_0308d8b7-ffa6-4bce-a434-23ecba5295fc became leader |
| | openstack-operators | kubelet | placement-operator-controller-manager-569c9576c5-4zgfk | Started | Started container manager |
| | openstack-operators | kubelet | ovn-operator-controller-manager-f9dd6d5b6-46wwk | Created | Created container: manager |
| | openstack-operators | kubelet | glance-operator-controller-manager-59bd97c6b9-s2zqv | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/glance-operator@sha256:3cc6bba71197ddf88dd4ba1301542bacbc1fe12e6faab2b69e6960944b3d74a0" in 7.001s (7.001s including waiting). Image size: 178172604 bytes. |
| | openstack-operators | kubelet | ovn-operator-controller-manager-f9dd6d5b6-46wwk | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/ovn-operator@sha256:315e558023b41ac1aa215082096995a03810c5b42910a33b00427ffcac9c6a14" in 6.659s (6.659s including waiting). Image size: 176590998 bytes. |
| | openstack-operators | rabbitmq-cluster-operator-manager-84795b7cfd-9mg2n_bfe2ed80-a055-4fb3-beb7-4e10fbc7c197 | rabbitmq-cluster-operator-leader-election | LeaderElection | rabbitmq-cluster-operator-manager-84795b7cfd-9mg2n_bfe2ed80-a055-4fb3-beb7-4e10fbc7c197 became leader |
| | openstack-operators | kubelet | placement-operator-controller-manager-569c9576c5-4zgfk | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/placement-operator@sha256:d33c1f507e1f5b9a4bf226ad98917e92101ac66b36e19d35cbe04ae7014f6bff" in 6.608s (6.609s including waiting). Image size: 176613087 bytes. |
| | openstack-operators | kubelet | watcher-operator-controller-manager-7c4579d8cf-pqbbd | Pulling | Pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy@sha256:d28df2924a366ed857d6c2c14baac9741238032d41f3d02c12cd757189b68b8a" |
| | openstack-operators | kubelet | watcher-operator-controller-manager-7c4579d8cf-pqbbd | Created | Created container: manager |
| | openstack-operators | kubelet | keystone-operator-controller-manager-f4487c759-hdfw8 | Pulling | Pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy@sha256:d28df2924a366ed857d6c2c14baac9741238032d41f3d02c12cd757189b68b8a" |
| | openstack-operators | kubelet | designate-operator-controller-manager-67d84b9cc-698kz | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/designate-operator@sha256:73736f216f886549901fbcfc823b072f73691c9a79ec79e59d100e992b9c1e34" in 7.125s (7.125s including waiting). Image size: 178372833 bytes. |
| | openstack-operators | kubelet | keystone-operator-controller-manager-f4487c759-hdfw8 | Started | Started container manager |
| | openstack-operators | kubelet | keystone-operator-controller-manager-f4487c759-hdfw8 | Created | Created container: manager |
| | openstack-operators | kubelet | keystone-operator-controller-manager-f4487c759-hdfw8 | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/keystone-operator@sha256:79b43a69884631c635d2164b95a2d4ec68f5cb33f96da14764f1c710880f3997" in 6.846s (6.848s including waiting). Image size: 177749453 bytes. |
| | openstack-operators | ovn-operator-controller-manager-f9dd6d5b6-46wwk_b5743668-5202-42be-a036-7016a4087d4c | 90840a60.openstack.org | LeaderElection | ovn-operator-controller-manager-f9dd6d5b6-46wwk_b5743668-5202-42be-a036-7016a4087d4c became leader |
| | openstack-operators | infra-operator-controller-manager-d68fd5cdf-sbpvg_ab59dcc4-3a06-43d8-b2c8-f4037f6d216c | c8c223a1.openstack.org | LeaderElection | infra-operator-controller-manager-d68fd5cdf-sbpvg_ab59dcc4-3a06-43d8-b2c8-f4037f6d216c became leader |
| | openstack-operators | kubelet | designate-operator-controller-manager-67d84b9cc-698kz | Pulling | Pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy@sha256:d28df2924a366ed857d6c2c14baac9741238032d41f3d02c12cd757189b68b8a" |
| | openstack-operators | kubelet | designate-operator-controller-manager-67d84b9cc-698kz | Started | Started container manager |
| | openstack-operators | kubelet | watcher-operator-controller-manager-7c4579d8cf-pqbbd | Started | Started container manager |
| | openstack-operators | kubelet | glance-operator-controller-manager-59bd97c6b9-s2zqv | Pulling | Pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy@sha256:d28df2924a366ed857d6c2c14baac9741238032d41f3d02c12cd757189b68b8a" |
| | openstack-operators | designate-operator-controller-manager-67d84b9cc-698kz_c4429690-03eb-4fa6-ae84-09257dbcced6 | f9497e05.openstack.org | LeaderElection | designate-operator-controller-manager-67d84b9cc-698kz_c4429690-03eb-4fa6-ae84-09257dbcced6 became leader |
| | openstack-operators | kubelet | placement-operator-controller-manager-569c9576c5-4zgfk | Created | Created container: manager |
| | openstack-operators | watcher-operator-controller-manager-7c4579d8cf-pqbbd_86f65921-66ff-4d30-b730-f3f8eb3d74d2 | 5049980f.openstack.org | LeaderElection | watcher-operator-controller-manager-7c4579d8cf-pqbbd_86f65921-66ff-4d30-b730-f3f8eb3d74d2 became leader |
| | openstack-operators | kubelet | watcher-operator-controller-manager-7c4579d8cf-pqbbd | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/watcher-operator@sha256:98a5233f0596591acdf2c6a5838b08be108787cdb6ad1995b2b7886bac0fe6ca" in 6.374s (6.374s including waiting). Image size: 177169608 bytes. |
| | openstack-operators | kubelet | glance-operator-controller-manager-59bd97c6b9-s2zqv | Started | Started container manager |
| | openstack-operators | cinder-operator-controller-manager-5484486656-vvnpp_bfbeea75-01a8-4cd9-a083-4e2b43c3e2b1 | a6b6a260.openstack.org | LeaderElection | cinder-operator-controller-manager-5484486656-vvnpp_bfbeea75-01a8-4cd9-a083-4e2b43c3e2b1 became leader |
| | openstack-operators | kubelet | glance-operator-controller-manager-59bd97c6b9-s2zqv | Created | Created container: manager |
| | openstack-operators | kubelet | rabbitmq-cluster-operator-manager-84795b7cfd-9mg2n | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" in 6.176s (6.176s including waiting). Image size: 176351298 bytes. |
| | openstack-operators | kubelet | rabbitmq-cluster-operator-manager-84795b7cfd-9mg2n | Created | Created container: operator |
| | openstack-operators | kubelet | rabbitmq-cluster-operator-manager-84795b7cfd-9mg2n | Started | Started container operator |
| | openstack-operators | kubelet | infra-operator-controller-manager-d68fd5cdf-sbpvg | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/infra-operator@sha256:5cfb2ae1092445950b39dd59caa9a8c9367f42fb8353a8c3848d3bc729f24492" in 6.844s (6.844s including waiting). Image size: 179420336 bytes. |
| | openstack-operators | kubelet | infra-operator-controller-manager-d68fd5cdf-sbpvg | Created | Created container: manager |
| | openstack-operators | kubelet | infra-operator-controller-manager-d68fd5cdf-sbpvg | Started | Started container manager |
| | openstack-operators | kubelet | infra-operator-controller-manager-d68fd5cdf-sbpvg | Pulling | Pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy@sha256:d28df2924a366ed857d6c2c14baac9741238032d41f3d02c12cd757189b68b8a" |
| | openstack-operators | kubelet | ovn-operator-controller-manager-f9dd6d5b6-46wwk | Pulling | Pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy@sha256:d28df2924a366ed857d6c2c14baac9741238032d41f3d02c12cd757189b68b8a" |
| | openstack-operators | kubelet | placement-operator-controller-manager-569c9576c5-4zgfk | Pulling | Pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy@sha256:d28df2924a366ed857d6c2c14baac9741238032d41f3d02c12cd757189b68b8a" |
| | openstack-operators | glance-operator-controller-manager-59bd97c6b9-s2zqv_4ace0d5f-2a6e-4716-a142-be4290bec3dd | c569355b.openstack.org | LeaderElection | glance-operator-controller-manager-59bd97c6b9-s2zqv_4ace0d5f-2a6e-4716-a142-be4290bec3dd became leader |
| | openstack-operators | keystone-operator-controller-manager-f4487c759-hdfw8_7c82e452-b97b-4bbf-8c06-89de458e2963 | 6012128b.openstack.org | LeaderElection | keystone-operator-controller-manager-f4487c759-hdfw8_7c82e452-b97b-4bbf-8c06-89de458e2963 became leader |
| | openstack-operators | kubelet | cinder-operator-controller-manager-5484486656-vvnpp | Pulling | Pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy@sha256:d28df2924a366ed857d6c2c14baac9741238032d41f3d02c12cd757189b68b8a" |
| | openstack-operators | kubelet | cinder-operator-controller-manager-5484486656-vvnpp | Started | Started container manager |
| | openstack-operators | kubelet | cinder-operator-controller-manager-5484486656-vvnpp | Created | Created container: manager |
| | openstack-operators | kubelet | cinder-operator-controller-manager-5484486656-vvnpp | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/cinder-operator@sha256:c487a793648e64af2d64df5f6efbda2d4fd586acd7aee6838d3ec2b3edd9efb9" in 7.293s (7.293s including waiting). Image size: 177610939 bytes. |
| | openstack-operators | kubelet | manila-operator-controller-manager-6d78f57554-t6sj6 | Started | Started container kube-rbac-proxy |
| | openstack-operators | kubelet | openstack-operator-controller-manager-6566ff98d5-wbc89 | Started | Started container manager |
| | openstack-operators | kubelet | openstack-operator-controller-manager-6566ff98d5-wbc89 | Created | Created container: kube-rbac-proxy |
| | openstack-operators | kubelet | manila-operator-controller-manager-6d78f57554-t6sj6 | Created | Created container: kube-rbac-proxy |
| | openstack-operators | kubelet | test-operator-controller-manager-565dfd7bb9-c6fnn | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/kube-rbac-proxy@sha256:d28df2924a366ed857d6c2c14baac9741238032d41f3d02c12cd757189b68b8a" in 2.835s (2.835s including waiting). Image size: 68421467 bytes. |
| | openstack-operators | kubelet | test-operator-controller-manager-565dfd7bb9-c6fnn | Created | Created container: kube-rbac-proxy |
| | openstack-operators | kubelet | test-operator-controller-manager-565dfd7bb9-c6fnn | Started | Started container kube-rbac-proxy |
| | openstack-operators | kubelet | neutron-operator-controller-manager-7c95684bcc-qn2dm | Started | Started container kube-rbac-proxy |
| | openstack-operators | kubelet | openstack-operator-controller-manager-6566ff98d5-wbc89 | Pulled | Container image "quay.io/openstack-k8s-operators/kube-rbac-proxy@sha256:d28df2924a366ed857d6c2c14baac9741238032d41f3d02c12cd757189b68b8a" already present on machine |
| | openstack-operators | kubelet | manila-operator-controller-manager-6d78f57554-t6sj6 | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/kube-rbac-proxy@sha256:d28df2924a366ed857d6c2c14baac9741238032d41f3d02c12cd757189b68b8a" in 3.633s (3.633s including waiting). Image size: 68421467 bytes. |
| | openstack-operators | kubelet | openstack-baremetal-operator-controller-manager-69958697d76f9td | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/kube-rbac-proxy@sha256:d28df2924a366ed857d6c2c14baac9741238032d41f3d02c12cd757189b68b8a" in 2.681s (2.681s including waiting). Image size: 68421467 bytes. |
| | openstack-operators | kubelet | openstack-operator-controller-manager-6566ff98d5-wbc89 | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/openstack-operator@sha256:766f6cc606336f5197dc6f7b61bf140b28159516bf388f2ea65ed95013829a1c" in 3.871s (3.871s including waiting). Image size: 265163335 bytes. |
| | openstack-operators | kubelet | neutron-operator-controller-manager-7c95684bcc-qn2dm | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/kube-rbac-proxy@sha256:d28df2924a366ed857d6c2c14baac9741238032d41f3d02c12cd757189b68b8a" in 3.655s (3.655s including waiting). Image size: 68421467 bytes. |
| | openstack-operators | kubelet | openstack-operator-controller-manager-6566ff98d5-wbc89 | Started | Started container kube-rbac-proxy |
| | openstack-operators | kubelet | openstack-baremetal-operator-controller-manager-69958697d76f9td | Created | Created container: kube-rbac-proxy |
| | openstack-operators | kubelet | openstack-baremetal-operator-controller-manager-69958697d76f9td | Started | Started container kube-rbac-proxy |
| | openstack-operators | kubelet | ironic-operator-controller-manager-6b498574d4-tcqkg | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/kube-rbac-proxy@sha256:d28df2924a366ed857d6c2c14baac9741238032d41f3d02c12cd757189b68b8a" in 3.577s (3.577s including waiting). Image size: 68421467 bytes. |
| | openstack-operators | kubelet | ironic-operator-controller-manager-6b498574d4-tcqkg | Created | Created container: kube-rbac-proxy |
| | openstack-operators | kubelet | ironic-operator-controller-manager-6b498574d4-tcqkg | Started | Started container kube-rbac-proxy |
| | openstack-operators | kubelet | neutron-operator-controller-manager-7c95684bcc-qn2dm | Created | Created container: kube-rbac-proxy |
| | openstack-operators | kubelet | openstack-operator-controller-manager-6566ff98d5-wbc89 | Created | Created container: manager |
| | openstack-operators | openstack-operator-controller-manager-6566ff98d5-wbc89_a95f7d55-b90a-48dd-b018-05fbff2cf91b | 40ba705e.openstack.org | LeaderElection | openstack-operator-controller-manager-6566ff98d5-wbc89_a95f7d55-b90a-48dd-b018-05fbff2cf91b became leader |
| | openstack-operators | kubelet | cinder-operator-controller-manager-5484486656-vvnpp | Created | Created container: kube-rbac-proxy |
| | openstack-operators | kubelet | cinder-operator-controller-manager-5484486656-vvnpp | Started | Started container kube-rbac-proxy |
| | openstack-operators | kubelet | designate-operator-controller-manager-67d84b9cc-698kz | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/kube-rbac-proxy@sha256:d28df2924a366ed857d6c2c14baac9741238032d41f3d02c12cd757189b68b8a" in 2.999s (2.999s including waiting). Image size: 68421467 bytes. |
| | openstack-operators | kubelet | designate-operator-controller-manager-67d84b9cc-698kz | Created | Created container: kube-rbac-proxy |
| | openstack-operators | kubelet | designate-operator-controller-manager-67d84b9cc-698kz | Started | Started container kube-rbac-proxy |
| | openstack-operators | kubelet | placement-operator-controller-manager-569c9576c5-4zgfk | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/kube-rbac-proxy@sha256:d28df2924a366ed857d6c2c14baac9741238032d41f3d02c12cd757189b68b8a" in 2.701s (2.702s including waiting). Image size: 68421467 bytes. |
| | openstack-operators | kubelet | keystone-operator-controller-manager-f4487c759-hdfw8 | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/kube-rbac-proxy@sha256:d28df2924a366ed857d6c2c14baac9741238032d41f3d02c12cd757189b68b8a" in 3.141s (3.141s including waiting). Image size: 68421467 bytes. |
| | openstack-operators | kubelet | keystone-operator-controller-manager-f4487c759-hdfw8 | Created | Created container: kube-rbac-proxy |
| | openstack-operators | kubelet | keystone-operator-controller-manager-f4487c759-hdfw8 | Started | Started container kube-rbac-proxy |
openstack-operators |
kubelet |
placement-operator-controller-manager-569c9576c5-4zgfk |
Started |
Started container kube-rbac-proxy | |
openstack-operators |
kubelet |
placement-operator-controller-manager-569c9576c5-4zgfk |
Created |
Created container: kube-rbac-proxy | |
openstack-operators |
kubelet |
cinder-operator-controller-manager-5484486656-vvnpp |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/kube-rbac-proxy@sha256:d28df2924a366ed857d6c2c14baac9741238032d41f3d02c12cd757189b68b8a" in 3.127s (3.127s including waiting). Image size: 68421467 bytes. | |
openstack-operators |
kubelet |
glance-operator-controller-manager-59bd97c6b9-s2zqv |
Started |
Started container kube-rbac-proxy | |
openstack-operators |
kubelet |
glance-operator-controller-manager-59bd97c6b9-s2zqv |
Created |
Created container: kube-rbac-proxy | |
openstack-operators |
kubelet |
glance-operator-controller-manager-59bd97c6b9-s2zqv |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/kube-rbac-proxy@sha256:d28df2924a366ed857d6c2c14baac9741238032d41f3d02c12cd757189b68b8a" in 3.089s (3.089s including waiting). Image size: 68421467 bytes. | |
openstack-operators |
kubelet |
infra-operator-controller-manager-d68fd5cdf-sbpvg |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/kube-rbac-proxy@sha256:d28df2924a366ed857d6c2c14baac9741238032d41f3d02c12cd757189b68b8a" in 3.635s (3.635s including waiting). Image size: 68421467 bytes. | |
openstack-operators |
kubelet |
ovn-operator-controller-manager-f9dd6d5b6-46wwk |
Started |
Started container kube-rbac-proxy | |
openstack-operators |
kubelet |
infra-operator-controller-manager-d68fd5cdf-sbpvg |
Created |
Created container: kube-rbac-proxy | |
openstack-operators |
kubelet |
infra-operator-controller-manager-d68fd5cdf-sbpvg |
Started |
Started container kube-rbac-proxy | |
| | openstack-operators | kubelet | watcher-operator-controller-manager-7c4579d8cf-pqbbd | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/kube-rbac-proxy@sha256:d28df2924a366ed857d6c2c14baac9741238032d41f3d02c12cd757189b68b8a" in 3.744s (3.744s including waiting). Image size: 68421467 bytes. |
| | openstack-operators | kubelet | watcher-operator-controller-manager-7c4579d8cf-pqbbd | Created | Created container: kube-rbac-proxy |
| | openstack-operators | kubelet | watcher-operator-controller-manager-7c4579d8cf-pqbbd | Started | Started container kube-rbac-proxy |
| | openstack-operators | kubelet | ovn-operator-controller-manager-f9dd6d5b6-46wwk | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/kube-rbac-proxy@sha256:d28df2924a366ed857d6c2c14baac9741238032d41f3d02c12cd757189b68b8a" in 3.676s (3.676s including waiting). Image size: 68421467 bytes. |
| | openstack-operators | kubelet | ovn-operator-controller-manager-f9dd6d5b6-46wwk | Created | Created container: kube-rbac-proxy |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | NodeCurrentRevisionChanged | Updated node "master-1" from revision 5 to 6 because static pod is ready |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Progressing changed from True to False ("NodeInstallerProgressing: 3 nodes are at revision 6"), Available message changed from "StaticPodsAvailable: 3 nodes are active; 1 node is at revision 5; 2 nodes are at revision 6" to "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 6" |
| | openshift-kube-apiserver | kubelet | revision-pruner-6-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine |
| | openshift-kube-apiserver | multus | revision-pruner-6-master-0 | AddedInterface | Add eth0 [10.130.0.63/23] from ovn-kubernetes |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-prunecontroller | kube-apiserver-operator | PodCreated | Created Pod/revision-pruner-6-master-0 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver | kubelet | revision-pruner-6-master-0 | Created | Created container: pruner |
| | openshift-kube-apiserver | kubelet | revision-pruner-6-master-0 | Started | Started container pruner |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-prunecontroller | kube-apiserver-operator | PodCreated | Created Pod/revision-pruner-6-master-1 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver | multus | revision-pruner-6-master-1 | AddedInterface | Add eth0 [10.128.0.118/23] from ovn-kubernetes |
| | openshift-kube-apiserver | kubelet | revision-pruner-6-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine |
| | openshift-kube-apiserver | kubelet | revision-pruner-6-master-1 | Started | Started container pruner |
| | openshift-kube-apiserver | kubelet | revision-pruner-6-master-1 | Created | Created container: pruner |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-prunecontroller | kube-apiserver-operator | PodCreated | Created Pod/revision-pruner-6-master-2 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver | kubelet | revision-pruner-6-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine |
| | openshift-kube-apiserver | multus | revision-pruner-6-master-2 | AddedInterface | Add eth0 [10.129.0.106/23] from ovn-kubernetes |
| | openshift-kube-apiserver | kubelet | revision-pruner-6-master-2 | Created | Created container: pruner |
| | openshift-kube-apiserver | kubelet | revision-pruner-6-master-2 | Started | Started container pruner |
| | openshift-marketplace | kubelet | redhat-operators-jc2hh | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | redhat-operators-jc2hh | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac3e8e21a2acf57632da1156613d3ce424cc06446f4bd47349c7919367e1ff0f" already present on machine |
| | openshift-marketplace | kubelet | redhat-operators-jc2hh | Created | Created container: extract-utilities |
| | openshift-marketplace | multus | redhat-operators-jc2hh | AddedInterface | Add eth0 [10.129.0.108/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | redhat-operators-jc2hh | Pulling | Pulling image "registry.redhat.io/redhat/redhat-operator-index:v4.18" |
| | openshift-marketplace | kubelet | redhat-operators-jc2hh | Created | Created container: extract-content |
| | openshift-marketplace | kubelet | redhat-operators-jc2hh | Started | Started container extract-content |
| | openshift-marketplace | kubelet | redhat-operators-jc2hh | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-operator-index:v4.18" in 841ms (841ms including waiting). Image size: 1629241735 bytes. |
| | openshift-marketplace | kubelet | redhat-operators-jc2hh | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5" |
| | openshift-marketplace | kubelet | redhat-operators-jc2hh | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5" in 12.186s (12.186s including waiting). Image size: 911296197 bytes. |
| | openshift-marketplace | kubelet | redhat-operators-jc2hh | Created | Created container: registry-server |
| | openshift-marketplace | kubelet | redhat-operators-jc2hh | Started | Started container registry-server |
| | openshift-marketplace | kubelet | redhat-operators-jc2hh | Unhealthy | Startup probe failed: timeout: failed to connect service ":50051" within 1s |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-marketplace | kubelet | redhat-operators-jc2hh | Killing | Stopping container registry-server |
| | openshift-marketplace | kubelet | certified-operators-w9zcz | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac3e8e21a2acf57632da1156613d3ce424cc06446f4bd47349c7919367e1ff0f" already present on machine |
| | openshift-marketplace | kubelet | certified-operators-w9zcz | Pulling | Pulling image "registry.redhat.io/redhat/certified-operator-index:v4.18" |
| | openshift-marketplace | multus | certified-operators-w9zcz | AddedInterface | Add eth0 [10.129.0.121/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | certified-operators-w9zcz | Created | Created container: extract-utilities |
| | openshift-marketplace | kubelet | certified-operators-w9zcz | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | certified-operators-w9zcz | Pulled | Successfully pulled image "registry.redhat.io/redhat/certified-operator-index:v4.18" in 2.062s (2.062s including waiting). Image size: 1199160216 bytes. |
| | openshift-marketplace | kubelet | certified-operators-w9zcz | Created | Created container: extract-content |
| | openshift-marketplace | kubelet | certified-operators-w9zcz | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5" |
| | openshift-marketplace | kubelet | certified-operators-w9zcz | Started | Started container extract-content |
| | openshift-marketplace | kubelet | certified-operators-w9zcz | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5" in 403ms (403ms including waiting). Image size: 911296197 bytes. |
| | openshift-marketplace | kubelet | certified-operators-w9zcz | Created | Created container: registry-server |
| | openshift-marketplace | kubelet | certified-operators-w9zcz | Started | Started container registry-server |
| | openshift-marketplace | kubelet | certified-operators-w9zcz | Killing | Stopping container registry-server |
| | default | endpoint-controller | glance-default-internal | FailedToCreateEndpoint | Failed to create endpoint for service openstack/glance-default-internal: endpoints "glance-default-internal" already exists |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-marketplace | kubelet | community-operators-g6dqm | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac3e8e21a2acf57632da1156613d3ce424cc06446f4bd47349c7919367e1ff0f" already present on machine |
| | openshift-marketplace | kubelet | community-operators-g6dqm | Pulling | Pulling image "registry.redhat.io/redhat/community-operator-index:v4.18" |
| | openshift-marketplace | multus | community-operators-g6dqm | AddedInterface | Add eth0 [10.129.0.191/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | community-operators-g6dqm | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | community-operators-g6dqm | Created | Created container: extract-utilities |
| | openshift-marketplace | kubelet | community-operators-g6dqm | Pulled | Successfully pulled image "registry.redhat.io/redhat/community-operator-index:v4.18" in 644ms (644ms including waiting). Image size: 1181047702 bytes. |
| | openshift-marketplace | kubelet | community-operators-g6dqm | Created | Created container: extract-content |
| | openshift-marketplace | kubelet | community-operators-g6dqm | Started | Started container extract-content |
| | openshift-marketplace | multus | redhat-marketplace-9llqb | AddedInterface | Add eth0 [10.129.0.192/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | redhat-marketplace-9llqb | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac3e8e21a2acf57632da1156613d3ce424cc06446f4bd47349c7919367e1ff0f" already present on machine |
| | openshift-marketplace | kubelet | redhat-marketplace-9llqb | Created | Created container: extract-utilities |
| | openshift-marketplace | kubelet | community-operators-g6dqm | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5" |
| | openshift-marketplace | kubelet | redhat-marketplace-9llqb | Pulling | Pulling image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" |
| | openshift-marketplace | kubelet | redhat-marketplace-9llqb | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | community-operators-g6dqm | Created | Created container: registry-server |
| | openshift-marketplace | kubelet | community-operators-g6dqm | Started | Started container registry-server |
| | openshift-marketplace | kubelet | community-operators-g6dqm | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5" in 463ms (463ms including waiting). Image size: 911296197 bytes. |
| | openshift-marketplace | kubelet | redhat-marketplace-9llqb | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" in 942ms (942ms including waiting). Image size: 1057212814 bytes. |
| | openshift-marketplace | kubelet | redhat-marketplace-9llqb | Created | Created container: extract-content |
| | openshift-marketplace | kubelet | redhat-marketplace-9llqb | Started | Started container extract-content |
| | openshift-marketplace | kubelet | redhat-marketplace-9llqb | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5" |
| | openshift-marketplace | kubelet | redhat-marketplace-9llqb | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5" in 464ms (464ms including waiting). Image size: 911296197 bytes. |
| | openshift-marketplace | kubelet | redhat-marketplace-9llqb | Created | Created container: registry-server |
| | openshift-marketplace | kubelet | redhat-marketplace-9llqb | Started | Started container registry-server |
| | openshift-marketplace | kubelet | community-operators-g6dqm | Killing | Stopping container registry-server |
| | openshift-marketplace | kubelet | redhat-marketplace-9llqb | Killing | Stopping container registry-server |
| | openshift-marketplace | kubelet | redhat-operators-6kmvg | Created | Created container: extract-utilities |
| | openshift-marketplace | kubelet | redhat-operators-6kmvg | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | redhat-operators-6kmvg | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac3e8e21a2acf57632da1156613d3ce424cc06446f4bd47349c7919367e1ff0f" already present on machine |
| | openshift-marketplace | multus | redhat-operators-6kmvg | AddedInterface | Add eth0 [10.129.0.193/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | redhat-operators-6kmvg | Pulling | Pulling image "registry.redhat.io/redhat/redhat-operator-index:v4.18" |
| | openshift-marketplace | kubelet | redhat-operators-6kmvg | Started | Started container extract-content |
| | openshift-marketplace | kubelet | redhat-operators-6kmvg | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-operator-index:v4.18" in 712ms (712ms including waiting). Image size: 1629241735 bytes. |
| | openshift-marketplace | kubelet | redhat-operators-6kmvg | Created | Created container: extract-content |
| | openshift-marketplace | kubelet | redhat-operators-6kmvg | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5" |
| | openshift-marketplace | kubelet | redhat-operators-6kmvg | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5" in 808ms (809ms including waiting). Image size: 911296197 bytes. |
| | openshift-marketplace | kubelet | redhat-operators-6kmvg | Created | Created container: registry-server |
| | openshift-marketplace | kubelet | redhat-operators-6kmvg | Started | Started container registry-server |
| | openshift-marketplace | kubelet | redhat-operators-6kmvg | Killing | Stopping container registry-server |
| | openshift-operator-lifecycle-manager | job-controller | collect-profiles-29340825 | SuccessfulCreate | Created pod: collect-profiles-29340825-szpzv |
| | openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SuccessfulCreate | Created job collect-profiles-29340825 |
| | openshift-operator-lifecycle-manager | kubelet | collect-profiles-29340825-szpzv | Created | Created container: collect-profiles |
| | openshift-operator-lifecycle-manager | kubelet | collect-profiles-29340825-szpzv | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac3e8e21a2acf57632da1156613d3ce424cc06446f4bd47349c7919367e1ff0f" already present on machine |
| | openshift-operator-lifecycle-manager | multus | collect-profiles-29340825-szpzv | AddedInterface | Add eth0 [10.128.0.192/23] from ovn-kubernetes |
| | openshift-operator-lifecycle-manager | kubelet | collect-profiles-29340825-szpzv | Started | Started container collect-profiles |
| | openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SawCompletedJob | Saw completed job: collect-profiles-29340825, condition: Complete |
| | openshift-operator-lifecycle-manager | job-controller | collect-profiles-29340825 | Completed | Job completed |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-marketplace | multus | certified-operators-75z82 | AddedInterface | Add eth0 [10.129.0.194/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | certified-operators-75z82 | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | certified-operators-75z82 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac3e8e21a2acf57632da1156613d3ce424cc06446f4bd47349c7919367e1ff0f" already present on machine |
| | openshift-marketplace | kubelet | certified-operators-75z82 | Created | Created container: extract-utilities |
| | openshift-marketplace | kubelet | certified-operators-75z82 | Pulling | Pulling image "registry.redhat.io/redhat/certified-operator-index:v4.18" |
| | openshift-marketplace | kubelet | certified-operators-75z82 | Started | Started container extract-content |
| | openshift-marketplace | kubelet | certified-operators-75z82 | Pulled | Successfully pulled image "registry.redhat.io/redhat/certified-operator-index:v4.18" in 889ms (889ms including waiting). Image size: 1199160216 bytes. |
| | openshift-marketplace | kubelet | certified-operators-75z82 | Created | Created container: extract-content |
| | openshift-marketplace | kubelet | certified-operators-75z82 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5" |
| | openshift-marketplace | kubelet | certified-operators-75z82 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5" in 435ms (435ms including waiting). Image size: 911296197 bytes. |
| | openshift-marketplace | kubelet | certified-operators-75z82 | Created | Created container: registry-server |
| | openshift-marketplace | kubelet | certified-operators-75z82 | Started | Started container registry-server |
| | openshift-marketplace | kubelet | certified-operators-75z82 | Killing | Stopping container registry-server |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-marketplace | kubelet | community-operators-jdhvr | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac3e8e21a2acf57632da1156613d3ce424cc06446f4bd47349c7919367e1ff0f" already present on machine |
| | openshift-marketplace | kubelet | community-operators-jdhvr | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | community-operators-jdhvr | Created | Created container: extract-utilities |
| | openshift-marketplace | multus | community-operators-jdhvr | AddedInterface | Add eth0 [10.129.0.195/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | community-operators-jdhvr | Pulled | Successfully pulled image "registry.redhat.io/redhat/community-operator-index:v4.18" in 713ms (713ms including waiting). Image size: 1181047702 bytes. |
| | openshift-marketplace | kubelet | community-operators-jdhvr | Pulling | Pulling image "registry.redhat.io/redhat/community-operator-index:v4.18" |
| | openshift-marketplace | kubelet | community-operators-jdhvr | Started | Started container extract-content |
| | openshift-marketplace | kubelet | community-operators-jdhvr | Created | Created container: extract-content |
| | openshift-marketplace | kubelet | community-operators-jdhvr | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5" |
| | openshift-marketplace | kubelet | community-operators-jdhvr | Created | Created container: registry-server |
| | openshift-marketplace | kubelet | community-operators-jdhvr | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5" in 782ms (782ms including waiting). Image size: 911296197 bytes. |
| | openshift-marketplace | kubelet | community-operators-jdhvr | Started | Started container registry-server |
| | openshift-marketplace | kubelet | community-operators-jdhvr | Killing | Stopping container registry-server |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-defrag-controller-defragcontroller | etcd-operator | DefragControllerDefragmentAttempt | Attempting defrag on member: master-0, memberID: 18f50f443f6f157e, dbSize: 259067904, dbInUse: 84332544, leader ID: 6676704299130470762 |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-defrag-controller-defragcontroller | etcd-operator | DefragControllerDefragmentSuccess | etcd member has been defragmented: master-0, memberID: 1798360412000818558 |
| | openshift-marketplace | multus | redhat-marketplace-ssf75 | AddedInterface | Add eth0 [10.129.0.196/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | redhat-marketplace-ssf75 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac3e8e21a2acf57632da1156613d3ce424cc06446f4bd47349c7919367e1ff0f" already present on machine |
| | openshift-marketplace | kubelet | redhat-marketplace-ssf75 | Pulling | Pulling image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" |
| | openshift-marketplace | kubelet | redhat-marketplace-ssf75 | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | redhat-marketplace-ssf75 | Created | Created container: extract-utilities |
| | openshift-marketplace | kubelet | redhat-marketplace-ssf75 | Started | Started container extract-content |
| | openshift-marketplace | kubelet | redhat-marketplace-ssf75 | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" in 670ms (670ms including waiting). Image size: 1057212814 bytes. |
| | openshift-marketplace | kubelet | redhat-marketplace-ssf75 | Created | Created container: extract-content |
| | openshift-marketplace | kubelet | redhat-marketplace-ssf75 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5" |
| | openshift-marketplace | kubelet | redhat-marketplace-ssf75 | Created | Created container: registry-server |
| | openshift-marketplace | kubelet | redhat-marketplace-ssf75 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5" in 456ms (456ms including waiting). Image size: 911296197 bytes. |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-defrag-controller-defragcontroller | etcd-operator | DefragControllerDefragmentAttempt | Attempting defrag on member: master-2, memberID: 6af36d024ef3f7a3, dbSize: 258572288, dbInUse: 84291584, leader ID: 6676704299130470762 |
| | openshift-marketplace | kubelet | redhat-marketplace-ssf75 | Started | Started container registry-server |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-defrag-controller-defragcontroller | etcd-operator | DefragControllerDefragmentSuccess | etcd member has been defragmented: master-2, memberID: 7706623244043024291 |
| | openshift-marketplace | kubelet | redhat-marketplace-ssf75 | Killing | Stopping container registry-server |
| | openshift-marketplace | multus | redhat-operators-vqdj2 | AddedInterface | Add eth0 [10.129.0.197/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | redhat-operators-vqdj2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac3e8e21a2acf57632da1156613d3ce424cc06446f4bd47349c7919367e1ff0f" already present on machine |
| | openshift-marketplace | kubelet | redhat-operators-vqdj2 | Created | Created container: extract-utilities |
| | openshift-marketplace | kubelet | redhat-operators-vqdj2 | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | redhat-operators-vqdj2 | Pulling | Pulling image "registry.redhat.io/redhat/redhat-operator-index:v4.18" |
| | openshift-marketplace | kubelet | redhat-operators-vqdj2 | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-operator-index:v4.18" in 1.161s (1.161s including waiting). Image size: 1629241735 bytes. |
| | openshift-marketplace | kubelet | redhat-operators-vqdj2 | Started | Started container extract-content |
| | openshift-marketplace | kubelet | redhat-operators-vqdj2 | Created | Created container: extract-content |
| | openshift-marketplace | kubelet | redhat-operators-vqdj2 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-defrag-controller-defragcontroller | etcd-operator | DefragControllerDefragmentAttempt | Attempting defrag on member: master-1, memberID: 5ca86b2f73fec16a, dbSize: 259309568, dbInUse: 84451328, leader ID: 6676704299130470762 |
| | openshift-marketplace | kubelet | redhat-operators-vqdj2 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5" in 4.614s (4.614s including waiting). Image size: 911296197 bytes. |
| | openshift-marketplace | kubelet | redhat-operators-vqdj2 | Started | Started container registry-server |
| | openshift-marketplace | kubelet | redhat-operators-vqdj2 | Created | Created container: registry-server |
| | openshift-marketplace | kubelet | redhat-operators-vqdj2 | Unhealthy | Startup probe failed: timeout: failed to connect service ":50051" within 1s |
| | openshift-marketplace | kubelet | redhat-operators-vqdj2 | Killing | Stopping container registry-server |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-marketplace | kubelet | certified-operators-qljc7 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac3e8e21a2acf57632da1156613d3ce424cc06446f4bd47349c7919367e1ff0f" already present on machine |
| | openshift-marketplace | kubelet | certified-operators-qljc7 | Created | Created container: extract-utilities |
| | openshift-marketplace | multus | certified-operators-qljc7 | AddedInterface | Add eth0 [10.129.0.198/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | certified-operators-qljc7 | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | certified-operators-qljc7 | Pulling | Pulling image "registry.redhat.io/redhat/certified-operator-index:v4.18" |
| | openshift-marketplace | kubelet | certified-operators-qljc7 | Started | Started container extract-content |
| | openshift-marketplace | kubelet | certified-operators-qljc7 | Created | Created container: extract-content |
| | openshift-marketplace | kubelet | certified-operators-qljc7 | Pulled | Successfully pulled image "registry.redhat.io/redhat/certified-operator-index:v4.18" in 930ms (930ms including waiting). Image size: 1199160216 bytes. |
| | openshift-marketplace | kubelet | certified-operators-qljc7 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5" |
| | openshift-marketplace | kubelet | certified-operators-qljc7 | Started | Started container registry-server |
| | openshift-marketplace | kubelet | certified-operators-qljc7 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5" in 442ms (442ms including waiting). Image size: 911296197 bytes. |
| | openshift-marketplace | kubelet | certified-operators-qljc7 | Created | Created container: registry-server |
| | openshift-marketplace | kubelet | certified-operators-qljc7 | Killing | Stopping container registry-server |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-operator-lifecycle-manager | job-controller | collect-profiles-29340840 | SuccessfulCreate | Created pod: collect-profiles-29340840-w6v9t |
| | openshift-operator-lifecycle-manager | kubelet | collect-profiles-29340840-w6v9t | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac3e8e21a2acf57632da1156613d3ce424cc06446f4bd47349c7919367e1ff0f" already present on machine |
| | openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SuccessfulCreate | Created job collect-profiles-29340840 |
| | openshift-operator-lifecycle-manager | multus | collect-profiles-29340840-w6v9t | AddedInterface | Add eth0 [10.128.0.194/23] from ovn-kubernetes |
| | openshift-operator-lifecycle-manager | kubelet | collect-profiles-29340840-w6v9t | Started | Started container collect-profiles |
| | openshift-operator-lifecycle-manager | kubelet | collect-profiles-29340840-w6v9t | Created | Created container: collect-profiles |
| | openshift-operator-lifecycle-manager | job-controller | collect-profiles-29340840 | Completed | Job completed |
| (x2) | openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SawCompletedJob | Saw completed job: collect-profiles-29340840, condition: Complete |
| | openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SuccessfulDelete | Deleted job collect-profiles-29340795 |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-master-1 | CreatedSCCRanges | created SCC ranges for openshift-must-gather-zqxwl namespace |
| | openshift-marketplace | multus | community-operators-hnrf8 | AddedInterface | Add eth0 [10.129.0.201/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | community-operators-hnrf8 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac3e8e21a2acf57632da1156613d3ce424cc06446f4bd47349c7919367e1ff0f" already present on machine |
| | openshift-marketplace | kubelet | community-operators-hnrf8 | Pulled | Successfully pulled image "registry.redhat.io/redhat/community-operator-index:v4.18" in 753ms (753ms including waiting). Image size: 1181047702 bytes. |
| | openshift-marketplace | kubelet | community-operators-hnrf8 | Pulling | Pulling image "registry.redhat.io/redhat/community-operator-index:v4.18" |
| | openshift-marketplace | kubelet | community-operators-hnrf8 | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | community-operators-hnrf8 | Created | Created container: extract-utilities |
| | openshift-marketplace | kubelet | community-operators-hnrf8 | Created | Created container: extract-content |
| | openshift-marketplace | kubelet | community-operators-hnrf8 | Started | Started container extract-content |
| | openshift-marketplace | kubelet | community-operators-hnrf8 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5" |
| | openshift-marketplace | kubelet | community-operators-hnrf8 | Started | Started container registry-server |
| | openshift-marketplace | kubelet | community-operators-hnrf8 | Created | Created container: registry-server |
| | openshift-marketplace | kubelet | community-operators-hnrf8 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5" in 1.465s (1.465s including waiting). Image size: 911296197 bytes. |
| | openshift-marketplace | kubelet | community-operators-hnrf8 | Killing | Stopping container registry-server |
| | openshift-marketplace | kubelet | redhat-marketplace-cfl42 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac3e8e21a2acf57632da1156613d3ce424cc06446f4bd47349c7919367e1ff0f" already present on machine |
| | openshift-marketplace | multus | redhat-marketplace-cfl42 | AddedInterface | Add eth0 [10.129.0.202/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | redhat-marketplace-cfl42 | Pulling | Pulling image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" |
| | openshift-marketplace | kubelet | redhat-marketplace-cfl42 | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | redhat-marketplace-cfl42 | Created | Created container: extract-utilities |
| | openshift-marketplace | kubelet | redhat-marketplace-cfl42 | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" in 750ms (750ms including waiting). Image size: 1057212814 bytes. |
| | openshift-marketplace | kubelet | redhat-marketplace-cfl42 | Created | Created container: extract-content |
| | openshift-marketplace | kubelet | redhat-marketplace-cfl42 | Started | Started container extract-content |
| | openshift-marketplace | kubelet | redhat-marketplace-cfl42 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5" |
| | openshift-marketplace | kubelet | redhat-marketplace-cfl42 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5" in 825ms (825ms including waiting). Image size: 911296197 bytes. |
| | openshift-marketplace | kubelet | redhat-marketplace-cfl42 | Created | Created container: registry-server |
| | openshift-marketplace | kubelet | redhat-marketplace-cfl42 | Started | Started container registry-server |
| | openshift-marketplace | kubelet | redhat-marketplace-cfl42 | Killing | Stopping container registry-server |
| | openshift-marketplace | kubelet | redhat-operators-cqdz4 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac3e8e21a2acf57632da1156613d3ce424cc06446f4bd47349c7919367e1ff0f" already present on machine |
| | openshift-marketplace | multus | redhat-operators-cqdz4 | AddedInterface | Add eth0 [10.129.0.203/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | redhat-operators-cqdz4 | Created | Created container: extract-utilities |
| | openshift-marketplace | kubelet | redhat-operators-cqdz4 | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | redhat-operators-cqdz4 | Pulling | Pulling image "registry.redhat.io/redhat/redhat-operator-index:v4.18" |
| | openshift-marketplace | kubelet | redhat-operators-cqdz4 | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-operator-index:v4.18" in 943ms (943ms including waiting). Image size: 1629241735 bytes. |
| | openshift-marketplace | kubelet | redhat-operators-cqdz4 | Created | Created container: extract-content |
| | openshift-marketplace | kubelet | redhat-operators-cqdz4 |
Started |
Started container extract-content | |
openshift-marketplace |
kubelet |
redhat-operators-cqdz4 |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5" | |
openshift-marketplace |
kubelet |
redhat-operators-cqdz4 |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5" in 848ms (848ms including waiting). Image size: 911296197 bytes. | |
openshift-marketplace |
kubelet |
redhat-operators-cqdz4 |
Created |
Created container: registry-server | |
openshift-marketplace |
kubelet |
redhat-operators-cqdz4 |
Started |
Started container registry-server |