| Time | Namespace | Component | RelatedObject | Reason | Message |
|---|---|---|---|---|---|
openshift-authentication |
oauth-openshift-5b466d87-4hv4r |
Scheduled |
Successfully assigned openshift-authentication/oauth-openshift-5b466d87-4hv4r to master-0 | ||
openshift-controller-manager |
controller-manager-5dcd7f7489-ts824 |
Scheduled |
Successfully assigned openshift-controller-manager/controller-manager-5dcd7f7489-ts824 to master-0 | ||
openshift-authentication |
oauth-openshift-5b466d87-4hv4r |
FailedScheduling |
0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. | ||
openshift-authentication |
oauth-openshift-5b466d87-4hv4r |
FailedScheduling |
0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. | ||
openshift-operators |
obo-prometheus-operator-668cf9dfbb-tllrd |
Scheduled |
Successfully assigned openshift-operators/obo-prometheus-operator-668cf9dfbb-tllrd to master-0 | ||
openshift-authentication |
oauth-openshift-54f5d5c856-4hhmn |
FailedScheduling |
skip schedule deleting pod: openshift-authentication/oauth-openshift-54f5d5c856-4hhmn | ||
openshift-authentication |
oauth-openshift-54f5d5c856-4hhmn |
FailedScheduling |
0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. | ||
openshift-operators |
obo-prometheus-operator-admission-webhook-74c6d8bb8-ckdnn |
Scheduled |
Successfully assigned openshift-operators/obo-prometheus-operator-admission-webhook-74c6d8bb8-ckdnn to master-0 | ||
cert-manager |
cert-manager-86cb77c54b-db45f |
Scheduled |
Successfully assigned cert-manager/cert-manager-86cb77c54b-db45f to master-0 | ||
openstack-operators |
watcher-operator-controller-manager-6b9b669fdb-jsphj |
Scheduled |
Successfully assigned openstack-operators/watcher-operator-controller-manager-6b9b669fdb-jsphj to master-0 | ||
openstack-operators |
test-operator-controller-manager-57dfcdd5b8-twtzz |
Scheduled |
Successfully assigned openstack-operators/test-operator-controller-manager-57dfcdd5b8-twtzz to master-0 | ||
openstack-operators |
telemetry-operator-controller-manager-7b5867bfc7-tfj67 |
Scheduled |
Successfully assigned openstack-operators/telemetry-operator-controller-manager-7b5867bfc7-tfj67 to master-0 | ||
openstack-operators |
swift-operator-controller-manager-696b999796-p6w8q |
Scheduled |
Successfully assigned openstack-operators/swift-operator-controller-manager-696b999796-p6w8q to master-0 | ||
openstack-operators |
rabbitmq-cluster-operator-manager-78955d896f-94dl9 |
Scheduled |
Successfully assigned openstack-operators/rabbitmq-cluster-operator-manager-78955d896f-94dl9 to master-0 | ||
openstack-operators |
placement-operator-controller-manager-6b64f6f645-rddjv |
Scheduled |
Successfully assigned openstack-operators/placement-operator-controller-manager-6b64f6f645-rddjv to master-0 | ||
cert-manager |
cert-manager-cainjector-855d9ccff4-lh4km |
Scheduled |
Successfully assigned cert-manager/cert-manager-cainjector-855d9ccff4-lh4km to master-0 | ||
openstack-operators |
ovn-operator-controller-manager-647f96877-75x24 |
Scheduled |
Successfully assigned openstack-operators/ovn-operator-controller-manager-647f96877-75x24 to master-0 | ||
openstack-operators |
openstack-operator-index-jg8xj |
Scheduled |
Successfully assigned openstack-operators/openstack-operator-index-jg8xj to master-0 | ||
openstack-operators |
openstack-operator-controller-operator-589d7b4556-d9qth |
Scheduled |
Successfully assigned openstack-operators/openstack-operator-controller-operator-589d7b4556-d9qth to master-0 | ||
openstack-operators |
openstack-operator-controller-operator-55b6fb9447-zn55t |
Scheduled |
Successfully assigned openstack-operators/openstack-operator-controller-operator-55b6fb9447-zn55t to master-0 | ||
openstack-operators |
openstack-operator-controller-manager-599cfccd85-dgvwj |
Scheduled |
Successfully assigned openstack-operators/openstack-operator-controller-manager-599cfccd85-dgvwj to master-0 | ||
openstack-operators |
openstack-baremetal-operator-controller-manager-6f998f5746f9gjr |
Scheduled |
Successfully assigned openstack-operators/openstack-baremetal-operator-controller-manager-6f998f5746f9gjr to master-0 | ||
openstack-operators |
octavia-operator-controller-manager-845b79dc4f-rs4fz |
Scheduled |
Successfully assigned openstack-operators/octavia-operator-controller-manager-845b79dc4f-rs4fz to master-0 | ||
cert-manager |
cert-manager-webhook-f4fb5df64-42npf |
Scheduled |
Successfully assigned cert-manager/cert-manager-webhook-f4fb5df64-42npf to master-0 | ||
openstack-operators |
nova-operator-controller-manager-865fc86d5b-fd78q |
Scheduled |
Successfully assigned openstack-operators/nova-operator-controller-manager-865fc86d5b-fd78q to master-0 | ||
openstack-operators |
neutron-operator-controller-manager-7cdd6b54fb-4jl24 |
Scheduled |
Successfully assigned openstack-operators/neutron-operator-controller-manager-7cdd6b54fb-4jl24 to master-0 | ||
openstack-operators |
mariadb-operator-controller-manager-647d75769b-8dqxm |
Scheduled |
Successfully assigned openstack-operators/mariadb-operator-controller-manager-647d75769b-8dqxm to master-0 | ||
openstack-operators |
manila-operator-controller-manager-56f9fbf74b-hq5jr |
Scheduled |
Successfully assigned openstack-operators/manila-operator-controller-manager-56f9fbf74b-hq5jr to master-0 | ||
openstack-operators |
keystone-operator-controller-manager-58b8dcc5fb-bpmdw |
Scheduled |
Successfully assigned openstack-operators/keystone-operator-controller-manager-58b8dcc5fb-bpmdw to master-0 | ||
openstack-operators |
ironic-operator-controller-manager-7c9bfd6967-782xf |
Scheduled |
Successfully assigned openstack-operators/ironic-operator-controller-manager-7c9bfd6967-782xf to master-0 | ||
openstack-operators |
infra-operator-controller-manager-7d9c9d7fd8-f228s |
Scheduled |
Successfully assigned openstack-operators/infra-operator-controller-manager-7d9c9d7fd8-f228s to master-0 | ||
openstack-operators |
horizon-operator-controller-manager-f6cc97788-dtxsw |
Scheduled |
Successfully assigned openstack-operators/horizon-operator-controller-manager-f6cc97788-dtxsw to master-0 | ||
openstack-operators |
heat-operator-controller-manager-7fd96594c7-xnrjg |
Scheduled |
Successfully assigned openstack-operators/heat-operator-controller-manager-7fd96594c7-xnrjg to master-0 | ||
openshift-operator-lifecycle-manager |
collect-profiles-29414205-lw5ls |
Scheduled |
Successfully assigned openshift-operator-lifecycle-manager/collect-profiles-29414205-lw5ls to master-0 | ||
openshift-operator-lifecycle-manager |
collect-profiles-29414190-mgzzv |
Scheduled |
Successfully assigned openshift-operator-lifecycle-manager/collect-profiles-29414190-mgzzv to master-0 | ||
openstack-operators |
glance-operator-controller-manager-78cd4f7769-58c6v |
Scheduled |
Successfully assigned openstack-operators/glance-operator-controller-manager-78cd4f7769-58c6v to master-0 | ||
openstack-operators |
designate-operator-controller-manager-84bc9f68f5-jjlq7 |
Scheduled |
Successfully assigned openstack-operators/designate-operator-controller-manager-84bc9f68f5-jjlq7 to master-0 | ||
openstack-operators |
cinder-operator-controller-manager-f8856dd79-mfhwn |
Scheduled |
Successfully assigned openstack-operators/cinder-operator-controller-manager-f8856dd79-mfhwn to master-0 | ||
openstack-operators |
barbican-operator-controller-manager-5cd89994b5-974hd |
Scheduled |
Successfully assigned openstack-operators/barbican-operator-controller-manager-5cd89994b5-974hd to master-0 | ||
openshift-operators |
obo-prometheus-operator-admission-webhook-74c6d8bb8-jqkr2 |
Scheduled |
Successfully assigned openshift-operators/obo-prometheus-operator-admission-webhook-74c6d8bb8-jqkr2 to master-0 | ||
openshift-operator-lifecycle-manager |
collect-profiles-29414175-675jb |
Scheduled |
Successfully assigned openshift-operator-lifecycle-manager/collect-profiles-29414175-675jb to master-0 | ||
openshift-console |
console-6fbd8c7bd5-6tskd |
Scheduled |
Successfully assigned openshift-console/console-6fbd8c7bd5-6tskd to master-0 | ||
openshift-marketplace |
redhat-marketplace-7j7ql |
Scheduled |
Successfully assigned openshift-marketplace/redhat-marketplace-7j7ql to master-0 | ||
openshift-marketplace |
redhat-marketplace-9mxhg |
Scheduled |
Successfully assigned openshift-marketplace/redhat-marketplace-9mxhg to master-0 | ||
openshift-marketplace |
redhat-marketplace-fhdw5 |
Scheduled |
Successfully assigned openshift-marketplace/redhat-marketplace-fhdw5 to master-0 | ||
openshift-marketplace |
redhat-marketplace-g7j5b |
Scheduled |
Successfully assigned openshift-marketplace/redhat-marketplace-g7j5b to master-0 | ||
openstack-operators |
917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafcwbgj |
Scheduled |
Successfully assigned openstack-operators/917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafcwbgj to master-0 | ||
openshift-storage |
vg-manager-d999v |
Scheduled |
Successfully assigned openshift-storage/vg-manager-d999v to master-0 | ||
openshift-storage |
lvms-operator-67f88ff75f-5j2p2 |
Scheduled |
Successfully assigned openshift-storage/lvms-operator-67f88ff75f-5j2p2 to master-0 | ||
openshift-marketplace |
redhat-marketplace-6vhlg |
Scheduled |
Successfully assigned openshift-marketplace/redhat-marketplace-6vhlg to master-0 | ||
openshift-marketplace |
redhat-marketplace-2xqkk |
Scheduled |
Successfully assigned openshift-marketplace/redhat-marketplace-2xqkk to master-0 | ||
openshift-operators |
observability-operator-d8bb48f5d-wc4bd |
Scheduled |
Successfully assigned openshift-operators/observability-operator-d8bb48f5d-wc4bd to master-0 | ||
openshift-monitoring |
telemeter-client-7487d49bdb-7f2xj |
Scheduled |
Successfully assigned openshift-monitoring/telemeter-client-7487d49bdb-7f2xj to master-0 | ||
openshift-authentication |
oauth-openshift-d676f96d8-88p47 |
Scheduled |
Successfully assigned openshift-authentication/oauth-openshift-d676f96d8-88p47 to master-0 | ||
openshift-cluster-machine-approver |
machine-approver-74d9cbffbc-9c59x |
Scheduled |
Successfully assigned openshift-cluster-machine-approver/machine-approver-74d9cbffbc-9c59x to master-0 | ||
openshift-multus |
multus-admission-controller-77d4cb9fc-5x5q5 |
Scheduled |
Successfully assigned openshift-multus/multus-admission-controller-77d4cb9fc-5x5q5 to master-0 | ||
openshift-controller-manager |
controller-manager-b644f86d6-mvlh8 |
Scheduled |
Successfully assigned openshift-controller-manager/controller-manager-b644f86d6-mvlh8 to master-0 | ||
openshift-network-console |
networking-console-plugin-7d45bf9455-w67z7 |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | ||
openshift-controller-manager |
controller-manager-b644f86d6-mvlh8 |
FailedScheduling |
0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. | ||
openshift-controller-manager |
controller-manager-b644f86d6-mvlh8 |
FailedScheduling |
0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. | ||
openshift-network-console |
networking-console-plugin-7d45bf9455-w67z7 |
Scheduled |
Successfully assigned openshift-network-console/networking-console-plugin-7d45bf9455-w67z7 to master-0 | ||
openshift-console |
console-6dc95c8d8-klv7m |
Scheduled |
Successfully assigned openshift-console/console-6dc95c8d8-klv7m to master-0 | ||
openshift-marketplace |
redhat-marketplace-ndng9 |
Scheduled |
Successfully assigned openshift-marketplace/redhat-marketplace-ndng9 to master-0 | ||
openshift-marketplace |
redhat-marketplace-pdr77 |
Scheduled |
Successfully assigned openshift-marketplace/redhat-marketplace-pdr77 to master-0 | ||
openshift-marketplace |
redhat-marketplace-rq9mp |
Scheduled |
Successfully assigned openshift-marketplace/redhat-marketplace-rq9mp to master-0 | ||
openshift-marketplace |
redhat-marketplace-svhl4 |
Scheduled |
Successfully assigned openshift-marketplace/redhat-marketplace-svhl4 to master-0 | ||
openshift-marketplace |
redhat-marketplace-xsdsw |
Scheduled |
Successfully assigned openshift-marketplace/redhat-marketplace-xsdsw to master-0 | ||
openshift-marketplace |
redhat-marketplace-zjmf2 |
Scheduled |
Successfully assigned openshift-marketplace/redhat-marketplace-zjmf2 to master-0 | ||
openshift-console |
console-75b84c855f-2zcgd |
Scheduled |
Successfully assigned openshift-console/console-75b84c855f-2zcgd to master-0 | ||
openshift-network-diagnostics |
network-check-source-85d8db45d4-5bjlq |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | ||
openshift-network-diagnostics |
network-check-source-85d8db45d4-5bjlq |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | ||
openshift-network-diagnostics |
network-check-source-85d8db45d4-5bjlq |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | ||
openshift-network-diagnostics |
network-check-source-85d8db45d4-5bjlq |
Scheduled |
Successfully assigned openshift-network-diagnostics/network-check-source-85d8db45d4-5bjlq to master-0 | ||
openshift-machine-config-operator |
machine-config-daemon-8jwk5 |
Scheduled |
Successfully assigned openshift-machine-config-operator/machine-config-daemon-8jwk5 to master-0 | ||
openshift-machine-config-operator |
machine-config-server-lh6sx |
Scheduled |
Successfully assigned openshift-machine-config-operator/machine-config-server-lh6sx to master-0 | ||
openshift-route-controller-manager |
route-controller-manager-74fbf9d4cf-77szk |
Scheduled |
Successfully assigned openshift-route-controller-manager/route-controller-manager-74fbf9d4cf-77szk to master-0 | ||
openshift-marketplace |
1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ag6hs7 |
Scheduled |
Successfully assigned openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ag6hs7 to master-0 | ||
openshift-console-operator |
console-operator-54dbc87ccb-n8qgg |
Scheduled |
Successfully assigned openshift-console-operator/console-operator-54dbc87ccb-n8qgg to master-0 | ||
metallb-system |
frr-k8s-ckzx9 |
Scheduled |
Successfully assigned metallb-system/frr-k8s-ckzx9 to master-0 | ||
openshift-marketplace |
5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fzc2gr |
Scheduled |
Successfully assigned openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fzc2gr to master-0 | ||
openshift-marketplace |
6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921049l57 |
Scheduled |
Successfully assigned openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921049l57 to master-0 | ||
openshift-marketplace |
7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d42bd6t |
Scheduled |
Successfully assigned openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d42bd6t to master-0 | ||
openshift-marketplace |
af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83265nd |
Scheduled |
Successfully assigned openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83265nd to master-0 | ||
openshift-marketplace |
certified-operators-dr5sc |
Scheduled |
Successfully assigned openshift-marketplace/certified-operators-dr5sc to master-0 | ||
openshift-route-controller-manager |
route-controller-manager-74fbf9d4cf-77szk |
FailedScheduling |
0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. | ||
openshift-marketplace |
certified-operators-dwgbt |
Scheduled |
Successfully assigned openshift-marketplace/certified-operators-dwgbt to master-0 | ||
metallb-system |
controller-f8648f98b-v2fsc |
Scheduled |
Successfully assigned metallb-system/controller-f8648f98b-v2fsc to master-0 | ||
openshift-console |
downloads-69cd4c69bf-wlssv |
Scheduled |
Successfully assigned openshift-console/downloads-69cd4c69bf-wlssv to master-0 | ||
metallb-system |
frr-k8s-webhook-server-7fcb986d4-hfnqb |
Scheduled |
Successfully assigned metallb-system/frr-k8s-webhook-server-7fcb986d4-hfnqb to master-0 | ||
openshift-marketplace |
certified-operators-kgvrg |
Scheduled |
Successfully assigned openshift-marketplace/certified-operators-kgvrg to master-0 | ||
openshift-marketplace |
certified-operators-knvdv |
Scheduled |
Successfully assigned openshift-marketplace/certified-operators-knvdv to master-0 | ||
openshift-marketplace |
certified-operators-w4cqh |
Scheduled |
Successfully assigned openshift-marketplace/certified-operators-w4cqh to master-0 | ||
openshift-marketplace |
community-operators-4gtrh |
Scheduled |
Successfully assigned openshift-marketplace/community-operators-4gtrh to master-0 | ||
openshift-cloud-controller-manager-operator |
cluster-cloud-controller-manager-operator-758cf9d97b-gcfhw |
Scheduled |
Successfully assigned openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-758cf9d97b-gcfhw to master-0 | ||
openshift-nmstate |
nmstate-handler-qthsq |
Scheduled |
Successfully assigned openshift-nmstate/nmstate-handler-qthsq to master-0 | ||
openshift-authentication |
oauth-openshift-5b466d87-4hv4r |
FailedScheduling |
0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. | ||
openshift-operators |
perses-operator-5446b9c989-mcgjr |
Scheduled |
Successfully assigned openshift-operators/perses-operator-5446b9c989-mcgjr to master-0 | ||
openshift-monitoring |
thanos-querier-6c5fbf6b84-vvhts |
Scheduled |
Successfully assigned openshift-monitoring/thanos-querier-6c5fbf6b84-vvhts to master-0 | ||
openshift-controller-manager |
controller-manager-5dcd7f7489-ts824 |
FailedScheduling |
0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. | ||
openshift-monitoring |
prometheus-operator-admission-webhook-7c85c4dffd-xv2wn |
Scheduled |
Successfully assigned openshift-monitoring/prometheus-operator-admission-webhook-7c85c4dffd-xv2wn to master-0 | ||
openshift-monitoring |
prometheus-operator-admission-webhook-7c85c4dffd-xv2wn |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | ||
openshift-monitoring |
prometheus-operator-admission-webhook-7c85c4dffd-xv2wn |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | ||
openshift-monitoring |
prometheus-operator-admission-webhook-7c85c4dffd-xv2wn |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | ||
openshift-monitoring |
prometheus-operator-6c74d9cb9f-pxd98 |
Scheduled |
Successfully assigned openshift-monitoring/prometheus-operator-6c74d9cb9f-pxd98 to master-0 | ||
openshift-monitoring |
prometheus-k8s-0 |
Scheduled |
Successfully assigned openshift-monitoring/prometheus-k8s-0 to master-0 | ||
openshift-monitoring |
prometheus-k8s-0 |
Scheduled |
Successfully assigned openshift-monitoring/prometheus-k8s-0 to master-0 | ||
openshift-monitoring |
openshift-state-metrics-5974b6b869-5fzg8 |
Scheduled |
Successfully assigned openshift-monitoring/openshift-state-metrics-5974b6b869-5fzg8 to master-0 | ||
openshift-monitoring |
node-exporter-z89ck |
Scheduled |
Successfully assigned openshift-monitoring/node-exporter-z89ck to master-0 | ||
openshift-monitoring |
monitoring-plugin-58f547f9c9-wnpsq |
Scheduled |
Successfully assigned openshift-monitoring/monitoring-plugin-58f547f9c9-wnpsq to master-0 | ||
openshift-monitoring |
metrics-server-88f9c775c-fw4ls |
Scheduled |
Successfully assigned openshift-monitoring/metrics-server-88f9c775c-fw4ls to master-0 | ||
openshift-monitoring |
kube-state-metrics-5857974f64-4rstk |
Scheduled |
Successfully assigned openshift-monitoring/kube-state-metrics-5857974f64-4rstk to master-0 | ||
openshift-marketplace |
community-operators-722qv |
Scheduled |
Successfully assigned openshift-marketplace/community-operators-722qv to master-0 | ||
openshift-ingress-canary |
ingress-canary-pz4lp |
Scheduled |
Successfully assigned openshift-ingress-canary/ingress-canary-pz4lp to master-0 | ||
metallb-system |
speaker-868rl |
Scheduled |
Successfully assigned metallb-system/speaker-868rl to master-0 | ||
openshift-nmstate |
nmstate-console-plugin-7fbb5f6569-gdhpp |
Scheduled |
Successfully assigned openshift-nmstate/nmstate-console-plugin-7fbb5f6569-gdhpp to master-0 | ||
openshift-ingress |
router-default-5465c8b4db-58d52 |
Scheduled |
Successfully assigned openshift-ingress/router-default-5465c8b4db-58d52 to master-0 | ||
openshift-ingress |
router-default-5465c8b4db-58d52 |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | ||
openshift-ingress |
router-default-5465c8b4db-58d52 |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | ||
openshift-ingress |
router-default-5465c8b4db-58d52 |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | ||
openshift-machine-config-operator |
machine-config-controller-7c6d64c4cd-5wrwt |
Scheduled |
Successfully assigned openshift-machine-config-operator/machine-config-controller-7c6d64c4cd-5wrwt to master-0 | ||
openshift-nmstate |
nmstate-metrics-7f946cbc9-xmkjr |
Scheduled |
Successfully assigned openshift-nmstate/nmstate-metrics-7f946cbc9-xmkjr to master-0 | ||
openshift-nmstate |
nmstate-operator-5b5b58f5c8-kv2p5 |
Scheduled |
Successfully assigned openshift-nmstate/nmstate-operator-5b5b58f5c8-kv2p5 to master-0 | ||
openshift-nmstate |
nmstate-webhook-5f6d4c5ccb-zg9dh |
Scheduled |
Successfully assigned openshift-nmstate/nmstate-webhook-5f6d4c5ccb-zg9dh to master-0 | ||
openshift-image-registry |
node-ca-g49xm |
Scheduled |
Successfully assigned openshift-image-registry/node-ca-g49xm to master-0 | ||
openshift-operator-lifecycle-manager |
collect-profiles-29414160-dmjlv |
Scheduled |
Successfully assigned openshift-operator-lifecycle-manager/collect-profiles-29414160-dmjlv to master-0 | ||
openshift-operator-lifecycle-manager |
collect-profiles-29414160-dmjlv |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | ||
metallb-system |
metallb-operator-webhook-server-567cbcbb98-h2n4q |
Scheduled |
Successfully assigned metallb-system/metallb-operator-webhook-server-567cbcbb98-h2n4q to master-0 | ||
openshift-console |
console-654c77b6c6-kh7ws |
Scheduled |
Successfully assigned openshift-console/console-654c77b6c6-kh7ws to master-0 | ||
openshift-marketplace |
community-operators-tzlrc |
Scheduled |
Successfully assigned openshift-marketplace/community-operators-tzlrc to master-0 | ||
openshift-marketplace |
community-operators-tnlxw |
Scheduled |
Successfully assigned openshift-marketplace/community-operators-tnlxw to master-0 | ||
openshift-marketplace |
redhat-operators-826mk |
Scheduled |
Successfully assigned openshift-marketplace/redhat-operators-826mk to master-0 | ||
openshift-marketplace |
redhat-operators-97qj5 |
Scheduled |
Successfully assigned openshift-marketplace/redhat-operators-97qj5 to master-0 | ||
openshift-marketplace |
redhat-operators-b7n8z |
Scheduled |
Successfully assigned openshift-marketplace/redhat-operators-b7n8z to master-0 | ||
openshift-marketplace |
redhat-operators-brsq4 |
Scheduled |
Successfully assigned openshift-marketplace/redhat-operators-brsq4 to master-0 | ||
openshift-marketplace |
redhat-operators-gzs5w |
Scheduled |
Successfully assigned openshift-marketplace/redhat-operators-gzs5w to master-0 | ||
openshift-marketplace |
redhat-operators-jjfcc |
Scheduled |
Successfully assigned openshift-marketplace/redhat-operators-jjfcc to master-0 | ||
openshift-marketplace |
redhat-operators-mwsqm |
Scheduled |
Successfully assigned openshift-marketplace/redhat-operators-mwsqm to master-0 | ||
openshift-console |
console-64cdd44ddd-2t62p |
Scheduled |
Successfully assigned openshift-console/console-64cdd44ddd-2t62p to master-0 | ||
openshift-marketplace |
redhat-operators-nlk69 |
Scheduled |
Successfully assigned openshift-marketplace/redhat-operators-nlk69 to master-0 | ||
openshift-marketplace |
redhat-operators-p5ndf |
Scheduled |
Successfully assigned openshift-marketplace/redhat-operators-p5ndf to master-0 | ||
openshift-marketplace |
redhat-operators-rxp4h |
Scheduled |
Successfully assigned openshift-marketplace/redhat-operators-rxp4h to master-0 | ||
openshift-marketplace |
redhat-operators-v2gh5 |
Scheduled |
Successfully assigned openshift-marketplace/redhat-operators-v2gh5 to master-0 | ||
openshift-marketplace |
redhat-operators-v4tlj |
Scheduled |
Successfully assigned openshift-marketplace/redhat-operators-v4tlj to master-0 | ||
openshift-marketplace |
redhat-marketplace-7bh6j |
Scheduled |
Successfully assigned openshift-marketplace/redhat-marketplace-7bh6j to master-0 | ||
openshift-console |
console-648b6fc966-ml84x |
Scheduled |
Successfully assigned openshift-console/console-648b6fc966-ml84x to master-0 | ||
metallb-system |
metallb-operator-controller-manager-6f8cddc44c-f979v |
Scheduled |
Successfully assigned metallb-system/metallb-operator-controller-manager-6f8cddc44c-f979v to master-0 | ||
openshift-route-controller-manager |
route-controller-manager-557cff67c-7qs6t |
FailedScheduling |
0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. | ||
openshift-route-controller-manager |
route-controller-manager-557cff67c-7qs6t |
FailedScheduling |
0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. | ||
openshift-marketplace |
redhat-operators-wgn8s |
Scheduled |
Successfully assigned openshift-marketplace/redhat-operators-wgn8s to master-0 | ||
openshift-marketplace |
redhat-operators-xgzlt |
Scheduled |
Successfully assigned openshift-marketplace/redhat-operators-xgzlt to master-0 | ||
openshift-monitoring |
alertmanager-main-0 |
Scheduled |
Successfully assigned openshift-monitoring/alertmanager-main-0 to master-0 | ||
openshift-monitoring |
alertmanager-main-0 |
Scheduled |
Successfully assigned openshift-monitoring/alertmanager-main-0 to master-0 | ||
openshift-route-controller-manager |
route-controller-manager-557cff67c-7qs6t |
Scheduled |
Successfully assigned openshift-route-controller-manager/route-controller-manager-557cff67c-7qs6t to master-0 | ||
openshift-console |
console-6475766b4d-m2nml |
Scheduled |
Successfully assigned openshift-console/console-6475766b4d-m2nml to master-0 | ||
openshift-marketplace |
community-operators-fvf4r |
Scheduled |
Successfully assigned openshift-marketplace/community-operators-fvf4r to master-0 | ||
kube-system |
default-scheduler |
kube-scheduler |
LeaderElection |
master-0_5ca7dc2c-05cc-4f72-9201-94bb213087fa became leader | |
kube-system |
cluster-policy-controller |
bootstrap-kube-controller-manager-master-0 |
ControlPlaneTopology |
unable to get control plane topology, using HA cluster values for leader election: the server could not find the requested resource (get infrastructures.config.openshift.io cluster) | |
kube-system |
cluster-policy-controller |
cluster-policy-controller-lock |
LeaderElection |
master-0_ea67c107-0b6f-4348-8168-37688f5cf571 became leader | |
kube-system |
Required control plane pods have been created | ||||
kube-system |
kube-controller-manager |
kube-controller-manager |
LeaderElection |
master-0_4d0c4803-a638-4911-8af9-db71a69bb531 became leader | |
default |
apiserver |
openshift-kube-apiserver |
KubeAPIReadyz |
readyz=true | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-infra namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-kube-apiserver namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for default namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for kube-node-lease namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for kube-public namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for kube-system namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-cluster-version namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-etcd namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for assisted-installer namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-kube-controller-manager-operator namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-kube-scheduler namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-kube-apiserver-operator namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-kube-controller-manager namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-cloud-credential-operator namespace | |
kube-system |
kube-controller-manager |
kube-controller-manager |
LeaderElection |
master-0_580e1a00-5185-412b-b7b0-8aab482235fa became leader | |
assisted-installer |
job-controller |
assisted-installer-controller |
SuccessfulCreate |
Created pod: assisted-installer-controller-lj5rn | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-ingress-operator namespace | |
openshift-cluster-version |
openshift-cluster-version |
version |
LeaderElection |
master-0_c61b9c51-689e-4779-8540-4c140f336d37 became leader | |
openshift-cluster-version |
deployment-controller |
cluster-version-operator |
ScalingReplicaSet |
Scaled up replica set cluster-version-operator-77dfcc565f to 1 | |
openshift-cluster-version |
openshift-cluster-version |
version |
LeaderElection |
master-0_87cbe5e2-b68d-44c6-9a7c-9634cfb7073d became leader | |
openshift-cluster-version |
openshift-cluster-version |
version |
LeaderElection |
master-0_90606d2c-c56e-4c2c-bba2-d1d365b5695f became leader | |
openshift-cluster-version |
openshift-cluster-version |
version |
RetrievePayload |
Retrieving and verifying payload version="4.18.29" image="quay.io/openshift-release-dev/ocp-release@sha256:8c885ea0b3c5124989f0a9b93eba98eb9fca6bbd0262772d85d90bf713a4d572" | |
openshift-cluster-version |
openshift-cluster-version |
version |
LoadPayload |
Loading payload version="4.18.29" image="quay.io/openshift-release-dev/ocp-release@sha256:8c885ea0b3c5124989f0a9b93eba98eb9fca6bbd0262772d85d90bf713a4d572" | |
openshift-cluster-version |
openshift-cluster-version |
version |
PayloadLoaded |
Payload loaded version="4.18.29" image="quay.io/openshift-release-dev/ocp-release@sha256:8c885ea0b3c5124989f0a9b93eba98eb9fca6bbd0262772d85d90bf713a4d572" architecture="amd64" | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-config-operator namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-cloud-network-config-controller namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-cluster-storage-operator namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-etcd-operator namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-apiserver-operator namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-network-operator namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-marketplace namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-cluster-node-tuning-operator namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-cluster-machine-approver namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-controller-manager-operator namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-kube-scheduler-operator namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-insights namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-cloud-controller-manager-operator namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-cluster-csi-drivers namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-authentication-operator namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-cluster-samples-operator namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-service-ca-operator namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-image-registry namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-cloud-controller-manager namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-machine-config-operator namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-dns-operator namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-openstack-infra namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-kube-storage-version-migrator-operator namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-cluster-olm-operator namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-operator-lifecycle-manager namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-kni-infra namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-operators namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-ovirt-infra namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-vsphere-infra namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-nutanix-infra namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-cloud-platform-infra namespace | |
openshift-cluster-olm-operator |
deployment-controller |
cluster-olm-operator |
ScalingReplicaSet |
Scaled up replica set cluster-olm-operator-56fcb6cc5f to 1 | |
openshift-dns-operator |
deployment-controller |
dns-operator |
ScalingReplicaSet |
Scaled up replica set dns-operator-7c56cf9b74 to 1 | |
openshift-kube-storage-version-migrator-operator |
deployment-controller |
kube-storage-version-migrator-operator |
ScalingReplicaSet |
Scaled up replica set kube-storage-version-migrator-operator-b9c5dfc78 to 1 | |
openshift-kube-scheduler-operator |
deployment-controller |
openshift-kube-scheduler-operator |
ScalingReplicaSet |
Scaled up replica set openshift-kube-scheduler-operator-5f85974995 to 1 | |
openshift-network-operator |
deployment-controller |
network-operator |
ScalingReplicaSet |
Scaled up replica set network-operator-79767b7ff9 to 1 | |
openshift-kube-controller-manager-operator |
deployment-controller |
kube-controller-manager-operator |
ScalingReplicaSet |
Scaled up replica set kube-controller-manager-operator-848f645654 to 1 | |
openshift-apiserver-operator |
deployment-controller |
openshift-apiserver-operator |
ScalingReplicaSet |
Scaled up replica set openshift-apiserver-operator-7bf7f6b755 to 1 | |
openshift-service-ca-operator |
deployment-controller |
service-ca-operator |
ScalingReplicaSet |
Scaled up replica set service-ca-operator-77758bc754 to 1 | |
openshift-controller-manager-operator |
deployment-controller |
openshift-controller-manager-operator |
ScalingReplicaSet |
Scaled up replica set openshift-controller-manager-operator-6c8676f99d to 1 | |
openshift-marketplace |
deployment-controller |
marketplace-operator |
ScalingReplicaSet |
Scaled up replica set marketplace-operator-f797b99b6 to 1 | |
openshift-authentication-operator |
deployment-controller |
authentication-operator |
ScalingReplicaSet |
Scaled up replica set authentication-operator-6c968fdfdf to 1 | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-monitoring namespace | |
openshift-etcd-operator |
deployment-controller |
etcd-operator |
ScalingReplicaSet |
Scaled up replica set etcd-operator-5bf4d88c6f to 1 | |
| (x2) | openshift-operator-lifecycle-manager |
controllermanager |
packageserver-pdb |
NoPods |
No matching pods found |
| (x14) | openshift-cluster-version |
replicaset-controller |
cluster-version-operator-77dfcc565f |
FailedCreate |
Error creating: pods "cluster-version-operator-77dfcc565f-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-user-workload-monitoring namespace | |
| (x10) | assisted-installer |
default-scheduler |
assisted-installer-controller-lj5rn |
FailedScheduling |
no nodes available to schedule pods |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-config-managed namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-config namespace | |
| (x12) | openshift-cluster-olm-operator |
replicaset-controller |
cluster-olm-operator-56fcb6cc5f |
FailedCreate |
Error creating: pods "cluster-olm-operator-56fcb6cc5f-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x12) | openshift-kube-storage-version-migrator-operator |
replicaset-controller |
kube-storage-version-migrator-operator-b9c5dfc78 |
FailedCreate |
Error creating: pods "kube-storage-version-migrator-operator-b9c5dfc78-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x12) | openshift-kube-scheduler-operator |
replicaset-controller |
openshift-kube-scheduler-operator-5f85974995 |
FailedCreate |
Error creating: pods "openshift-kube-scheduler-operator-5f85974995-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-machine-api namespace | |
| (x12) | openshift-network-operator |
replicaset-controller |
network-operator-79767b7ff9 |
FailedCreate |
Error creating: pods "network-operator-79767b7ff9-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x12) | openshift-dns-operator |
replicaset-controller |
dns-operator-7c56cf9b74 |
FailedCreate |
Error creating: pods "dns-operator-7c56cf9b74-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x12) | openshift-apiserver-operator |
replicaset-controller |
openshift-apiserver-operator-7bf7f6b755 |
FailedCreate |
Error creating: pods "openshift-apiserver-operator-7bf7f6b755-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
openshift-cluster-storage-operator |
deployment-controller |
csi-snapshot-controller-operator |
ScalingReplicaSet |
Scaled up replica set csi-snapshot-controller-operator-6bc8656fdc to 1 | |
| (x12) | openshift-kube-controller-manager-operator |
replicaset-controller |
kube-controller-manager-operator-848f645654 |
FailedCreate |
Error creating: pods "kube-controller-manager-operator-848f645654-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x12) | openshift-service-ca-operator |
replicaset-controller |
service-ca-operator-77758bc754 |
FailedCreate |
Error creating: pods "service-ca-operator-77758bc754-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x12) | openshift-controller-manager-operator |
replicaset-controller |
openshift-controller-manager-operator-6c8676f99d |
FailedCreate |
Error creating: pods "openshift-controller-manager-operator-6c8676f99d-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x12) | openshift-marketplace |
replicaset-controller |
marketplace-operator-f797b99b6 |
FailedCreate |
Error creating: pods "marketplace-operator-f797b99b6-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
openshift-cluster-node-tuning-operator |
deployment-controller |
cluster-node-tuning-operator |
ScalingReplicaSet |
Scaled up replica set cluster-node-tuning-operator-85cff47f46 to 1 | |
openshift-monitoring |
deployment-controller |
cluster-monitoring-operator |
ScalingReplicaSet |
Scaled up replica set cluster-monitoring-operator-7ff994598c to 1 | |
| (x12) | openshift-etcd-operator |
replicaset-controller |
etcd-operator-5bf4d88c6f |
FailedCreate |
Error creating: pods "etcd-operator-5bf4d88c6f-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x12) | openshift-authentication-operator |
replicaset-controller |
authentication-operator-6c968fdfdf |
FailedCreate |
Error creating: pods "authentication-operator-6c968fdfdf-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
openshift-ingress-operator |
deployment-controller |
ingress-operator |
ScalingReplicaSet |
Scaled up replica set ingress-operator-8649c48786 to 1 | |
openshift-operator-lifecycle-manager |
deployment-controller |
package-server-manager |
ScalingReplicaSet |
Scaled up replica set package-server-manager-67477646d4 to 1 | |
| (x10) | openshift-cluster-node-tuning-operator |
replicaset-controller |
cluster-node-tuning-operator-85cff47f46 |
FailedCreate |
Error creating: pods "cluster-node-tuning-operator-85cff47f46-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
openshift-kube-apiserver-operator |
deployment-controller |
kube-apiserver-operator |
ScalingReplicaSet |
Scaled up replica set kube-apiserver-operator-765d9ff747 to 1 | |
openshift-operator-lifecycle-manager |
deployment-controller |
catalog-operator |
ScalingReplicaSet |
Scaled up replica set catalog-operator-fbc6455c4 to 1 | |
| (x10) | openshift-monitoring |
replicaset-controller |
cluster-monitoring-operator-7ff994598c |
FailedCreate |
Error creating: pods "cluster-monitoring-operator-7ff994598c-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
openshift-image-registry |
deployment-controller |
cluster-image-registry-operator |
ScalingReplicaSet |
Scaled up replica set cluster-image-registry-operator-6fb9f88b7 to 1 | |
| (x9) | openshift-kube-apiserver-operator |
replicaset-controller |
kube-apiserver-operator-765d9ff747 |
FailedCreate |
Error creating: pods "kube-apiserver-operator-765d9ff747-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
openshift-operator-lifecycle-manager |
deployment-controller |
olm-operator |
ScalingReplicaSet |
Scaled up replica set olm-operator-7cd7dbb44c to 1 | |
default |
apiserver |
openshift-kube-apiserver |
ShutdownInitiated |
Received signal to terminate, becoming unready, but keeping serving | |
| (x8) | openshift-image-registry |
replicaset-controller |
cluster-image-registry-operator-6fb9f88b7 |
FailedCreate |
Error creating: pods "cluster-image-registry-operator-6fb9f88b7-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
default |
apiserver |
openshift-kube-apiserver |
AfterShutdownDelayDuration |
The minimal shutdown duration of 0s finished | |
| (x8) | openshift-operator-lifecycle-manager |
replicaset-controller |
catalog-operator-fbc6455c4 |
FailedCreate |
Error creating: pods "catalog-operator-fbc6455c4-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x8) | openshift-operator-lifecycle-manager |
replicaset-controller |
olm-operator-7cd7dbb44c |
FailedCreate |
Error creating: pods "olm-operator-7cd7dbb44c-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
default |
apiserver |
openshift-kube-apiserver |
TerminationPreShutdownHooksFinished |
All pre-shutdown hooks have been finished | |
kube-system |
Required control plane pods have been created | ||||
| (x11) | openshift-cluster-storage-operator |
replicaset-controller |
csi-snapshot-controller-operator-6bc8656fdc |
FailedCreate |
Error creating: pods "csi-snapshot-controller-operator-6bc8656fdc-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
default |
apiserver |
openshift-kube-apiserver |
HTTPServerStoppedListening |
HTTP Server has stopped listening | |
| (x10) | openshift-operator-lifecycle-manager |
replicaset-controller |
package-server-manager-67477646d4 |
FailedCreate |
Error creating: pods "package-server-manager-67477646d4-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
default |
apiserver |
openshift-kube-apiserver |
InFlightRequestsDrained |
All non long-running request(s) in-flight have drained | |
openshift-config-operator |
deployment-controller |
openshift-config-operator |
ScalingReplicaSet |
Scaled up replica set openshift-config-operator-68758cbcdb to 1 | |
| (x7) | openshift-config-operator |
replicaset-controller |
openshift-config-operator-68758cbcdb |
FailedCreate |
Error creating: pods "openshift-config-operator-68758cbcdb-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x10) | openshift-ingress-operator |
replicaset-controller |
ingress-operator-8649c48786 |
FailedCreate |
Error creating: pods "ingress-operator-8649c48786-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
kube-system |
default-scheduler |
kube-scheduler |
LeaderElection |
master-0_0889c1ce-7b05-4e14-b3e2-55a51613a385 became leader | |
kube-system |
cluster-policy-controller |
cluster-policy-controller-lock |
LeaderElection |
master-0_f919e853-37fd-4b23-b6a2-661808f74fbf became leader | |
default |
apiserver |
openshift-kube-apiserver |
KubeAPIReadyz |
readyz=true | |
| (x5) | assisted-installer |
default-scheduler |
assisted-installer-controller-lj5rn |
FailedScheduling |
no nodes available to schedule pods |
kube-system |
kube-controller-manager |
kube-controller-manager |
LeaderElection |
master-0_31c286dc-2841-40de-9203-3f9b886c360b became leader | |
| (x6) | openshift-cluster-olm-operator |
replicaset-controller |
cluster-olm-operator-56fcb6cc5f |
FailedCreate |
Error creating: pods "cluster-olm-operator-56fcb6cc5f-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x6) | openshift-apiserver-operator |
replicaset-controller |
openshift-apiserver-operator-7bf7f6b755 |
FailedCreate |
Error creating: pods "openshift-apiserver-operator-7bf7f6b755-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x6) | openshift-monitoring |
replicaset-controller |
cluster-monitoring-operator-7ff994598c |
FailedCreate |
Error creating: pods "cluster-monitoring-operator-7ff994598c-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
openshift-operator-lifecycle-manager |
controllermanager |
packageserver-pdb |
NoPods |
No matching pods found | |
| (x6) | openshift-kube-apiserver-operator |
replicaset-controller |
kube-apiserver-operator-765d9ff747 |
FailedCreate |
Error creating: pods "kube-apiserver-operator-765d9ff747-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x6) | openshift-ingress-operator |
replicaset-controller |
ingress-operator-8649c48786 |
FailedCreate |
Error creating: pods "ingress-operator-8649c48786-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x6) | openshift-cluster-storage-operator |
replicaset-controller |
csi-snapshot-controller-operator-6bc8656fdc |
FailedCreate |
Error creating: pods "csi-snapshot-controller-operator-6bc8656fdc-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x6) | openshift-config-operator |
replicaset-controller |
openshift-config-operator-68758cbcdb |
FailedCreate |
Error creating: pods "openshift-config-operator-68758cbcdb-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x6) | openshift-cluster-version |
replicaset-controller |
cluster-version-operator-77dfcc565f |
FailedCreate |
Error creating: pods "cluster-version-operator-77dfcc565f-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x6) | openshift-dns-operator |
replicaset-controller |
dns-operator-7c56cf9b74 |
FailedCreate |
Error creating: pods "dns-operator-7c56cf9b74-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x6) | openshift-kube-storage-version-migrator-operator |
replicaset-controller |
kube-storage-version-migrator-operator-b9c5dfc78 |
FailedCreate |
Error creating: pods "kube-storage-version-migrator-operator-b9c5dfc78-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x6) | openshift-cluster-node-tuning-operator |
replicaset-controller |
cluster-node-tuning-operator-85cff47f46 |
FailedCreate |
Error creating: pods "cluster-node-tuning-operator-85cff47f46-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x4) | openshift-service-ca-operator |
replicaset-controller |
service-ca-operator-77758bc754 |
FailedCreate |
Error creating: pods "service-ca-operator-77758bc754-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x6) | openshift-kube-controller-manager-operator |
replicaset-controller |
kube-controller-manager-operator-848f645654 |
FailedCreate |
Error creating: pods "kube-controller-manager-operator-848f645654-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x6) | openshift-operator-lifecycle-manager |
replicaset-controller |
catalog-operator-fbc6455c4 |
FailedCreate |
Error creating: pods "catalog-operator-fbc6455c4-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x6) | openshift-controller-manager-operator |
replicaset-controller |
openshift-controller-manager-operator-6c8676f99d |
FailedCreate |
Error creating: pods "openshift-controller-manager-operator-6c8676f99d-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x6) | openshift-etcd-operator |
replicaset-controller |
etcd-operator-5bf4d88c6f |
FailedCreate |
Error creating: pods "etcd-operator-5bf4d88c6f-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x6) | openshift-image-registry |
replicaset-controller |
cluster-image-registry-operator-6fb9f88b7 |
FailedCreate |
Error creating: pods "cluster-image-registry-operator-6fb9f88b7-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x6) | openshift-authentication-operator |
replicaset-controller |
authentication-operator-6c968fdfdf |
FailedCreate |
Error creating: pods "authentication-operator-6c968fdfdf-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x6) | openshift-kube-scheduler-operator |
replicaset-controller |
openshift-kube-scheduler-operator-5f85974995 |
FailedCreate |
Error creating: pods "openshift-kube-scheduler-operator-5f85974995-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
openshift-dns-operator |
default-scheduler |
dns-operator-7c56cf9b74-xz27r |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | |
| (x6) | openshift-operator-lifecycle-manager |
replicaset-controller |
olm-operator-7cd7dbb44c |
FailedCreate |
Error creating: pods "olm-operator-7cd7dbb44c-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
openshift-cluster-storage-operator |
replicaset-controller |
csi-snapshot-controller-operator-6bc8656fdc |
SuccessfulCreate |
Created pod: csi-snapshot-controller-operator-6bc8656fdc-2q7f5 | |
openshift-service-ca-operator |
replicaset-controller |
service-ca-operator-77758bc754 |
SuccessfulCreate |
Created pod: service-ca-operator-77758bc754-8smqn | |
| (x6) | openshift-network-operator |
replicaset-controller |
network-operator-79767b7ff9 |
FailedCreate |
Error creating: pods "network-operator-79767b7ff9-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x6) | openshift-marketplace |
replicaset-controller |
marketplace-operator-f797b99b6 |
FailedCreate |
Error creating: pods "marketplace-operator-f797b99b6-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x6) | openshift-operator-lifecycle-manager |
replicaset-controller |
package-server-manager-67477646d4 |
FailedCreate |
Error creating: pods "package-server-manager-67477646d4-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
openshift-cluster-olm-operator |
replicaset-controller |
cluster-olm-operator-56fcb6cc5f |
SuccessfulCreate |
Created pod: cluster-olm-operator-56fcb6cc5f-4xwp2 | |
openshift-cluster-olm-operator |
default-scheduler |
cluster-olm-operator-56fcb6cc5f-4xwp2 |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | |
openshift-apiserver-operator |
replicaset-controller |
openshift-apiserver-operator-7bf7f6b755 |
SuccessfulCreate |
Created pod: openshift-apiserver-operator-7bf7f6b755-sh6qf | |
openshift-cluster-node-tuning-operator |
replicaset-controller |
cluster-node-tuning-operator-85cff47f46 |
SuccessfulCreate |
Created pod: cluster-node-tuning-operator-85cff47f46-4gv5j | |
openshift-apiserver-operator |
default-scheduler |
openshift-apiserver-operator-7bf7f6b755-sh6qf |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | |
openshift-cluster-storage-operator |
default-scheduler |
csi-snapshot-controller-operator-6bc8656fdc-2q7f5 |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | |
| | openshift-service-ca-operator | default-scheduler | service-ca-operator-77758bc754-8smqn | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-dns-operator | replicaset-controller | dns-operator-7c56cf9b74 | SuccessfulCreate | Created pod: dns-operator-7c56cf9b74-xz27r |
| | openshift-cluster-node-tuning-operator | default-scheduler | cluster-node-tuning-operator-85cff47f46-4gv5j | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-authentication-operator | default-scheduler | authentication-operator-6c968fdfdf-nrrfw | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-etcd-operator | default-scheduler | etcd-operator-5bf4d88c6f-2bpmr | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-kube-scheduler-operator | default-scheduler | openshift-kube-scheduler-operator-5f85974995-g4rwv | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-cluster-version | replicaset-controller | cluster-version-operator-77dfcc565f | SuccessfulCreate | Created pod: cluster-version-operator-77dfcc565f-nqpsd |
| | openshift-authentication-operator | replicaset-controller | authentication-operator-6c968fdfdf | SuccessfulCreate | Created pod: authentication-operator-6c968fdfdf-nrrfw |
| | openshift-kube-scheduler-operator | replicaset-controller | openshift-kube-scheduler-operator-5f85974995 | SuccessfulCreate | Created pod: openshift-kube-scheduler-operator-5f85974995-g4rwv |
| | openshift-image-registry | replicaset-controller | cluster-image-registry-operator-6fb9f88b7 | SuccessfulCreate | Created pod: cluster-image-registry-operator-6fb9f88b7-tgvfl |
| | openshift-config-operator | default-scheduler | openshift-config-operator-68758cbcdb-zh8g5 | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-kube-controller-manager-operator | default-scheduler | kube-controller-manager-operator-848f645654-7hmhg | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-config-operator | replicaset-controller | openshift-config-operator-68758cbcdb | SuccessfulCreate | Created pod: openshift-config-operator-68758cbcdb-zh8g5 |
| | openshift-monitoring | default-scheduler | cluster-monitoring-operator-7ff994598c-p82nn | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-image-registry | default-scheduler | cluster-image-registry-operator-6fb9f88b7-tgvfl | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-cluster-version | default-scheduler | cluster-version-operator-77dfcc565f-nqpsd | Scheduled | Successfully assigned openshift-cluster-version/cluster-version-operator-77dfcc565f-nqpsd to master-0 |
| | openshift-controller-manager-operator | default-scheduler | openshift-controller-manager-operator-6c8676f99d-7z948 | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-etcd-operator | replicaset-controller | etcd-operator-5bf4d88c6f | SuccessfulCreate | Created pod: etcd-operator-5bf4d88c6f-2bpmr |
| | openshift-kube-controller-manager-operator | replicaset-controller | kube-controller-manager-operator-848f645654 | SuccessfulCreate | Created pod: kube-controller-manager-operator-848f645654-7hmhg |
| | openshift-controller-manager-operator | replicaset-controller | openshift-controller-manager-operator-6c8676f99d | SuccessfulCreate | Created pod: openshift-controller-manager-operator-6c8676f99d-7z948 |
| | openshift-kube-apiserver-operator | default-scheduler | kube-apiserver-operator-765d9ff747-gr68k | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-kube-apiserver-operator | replicaset-controller | kube-apiserver-operator-765d9ff747 | SuccessfulCreate | Created pod: kube-apiserver-operator-765d9ff747-gr68k |
| | openshift-monitoring | replicaset-controller | cluster-monitoring-operator-7ff994598c | SuccessfulCreate | Created pod: cluster-monitoring-operator-7ff994598c-p82nn |
| | openshift-marketplace | default-scheduler | marketplace-operator-f797b99b6-hjjrk | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-marketplace | replicaset-controller | marketplace-operator-f797b99b6 | SuccessfulCreate | Created pod: marketplace-operator-f797b99b6-hjjrk |
| | openshift-network-operator | replicaset-controller | network-operator-79767b7ff9 | SuccessfulCreate | Created pod: network-operator-79767b7ff9-5bgzx |
| | openshift-operator-lifecycle-manager | replicaset-controller | catalog-operator-fbc6455c4 | SuccessfulCreate | Created pod: catalog-operator-fbc6455c4-5m8ll |
| | openshift-operator-lifecycle-manager | default-scheduler | package-server-manager-67477646d4-7hndf | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-operator-lifecycle-manager | default-scheduler | catalog-operator-fbc6455c4-5m8ll | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-ingress-operator | replicaset-controller | ingress-operator-8649c48786 | SuccessfulCreate | Created pod: ingress-operator-8649c48786-cx2b2 |
| | openshift-kube-storage-version-migrator-operator | replicaset-controller | kube-storage-version-migrator-operator-b9c5dfc78 | SuccessfulCreate | Created pod: kube-storage-version-migrator-operator-b9c5dfc78-dcxkw |
| | openshift-operator-lifecycle-manager | default-scheduler | olm-operator-7cd7dbb44c-vzj4q | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-operator-lifecycle-manager | replicaset-controller | olm-operator-7cd7dbb44c | SuccessfulCreate | Created pod: olm-operator-7cd7dbb44c-vzj4q |
| | openshift-kube-storage-version-migrator-operator | default-scheduler | kube-storage-version-migrator-operator-b9c5dfc78-dcxkw | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-ingress-operator | default-scheduler | ingress-operator-8649c48786-cx2b2 | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-network-operator | default-scheduler | network-operator-79767b7ff9-5bgzx | Scheduled | Successfully assigned openshift-network-operator/network-operator-79767b7ff9-5bgzx to master-0 |
| | openshift-operator-lifecycle-manager | replicaset-controller | package-server-manager-67477646d4 | SuccessfulCreate | Created pod: package-server-manager-67477646d4-7hndf |
| | default | node-controller | master-0 | RegisteredNode | Node master-0 event: Registered Node master-0 in Controller |
| | assisted-installer | default-scheduler | assisted-installer-controller-lj5rn | Scheduled | Successfully assigned assisted-installer/assisted-installer-controller-lj5rn to master-0 |
| (x4) | openshift-machine-config-operator | kubelet | kube-rbac-proxy-crio-master-0 | BackOff | Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(3169f44496ed8a28c6d6a15511ab0eec) |
| (x4) | openshift-machine-config-operator | kubelet | kube-rbac-proxy-crio-master-0 | Created | Created container: kube-rbac-proxy-crio |
| (x4) | openshift-machine-config-operator | kubelet | kube-rbac-proxy-crio-master-0 | Started | Started container kube-rbac-proxy-crio |
| (x4) | openshift-machine-config-operator | kubelet | kube-rbac-proxy-crio-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69ffd8f8dcceedc2d6eb306cea33f8beabc1be1308cd5f4ee8b9a8e3eab9843" already present on machine |
| | openshift-network-operator | kubelet | network-operator-79767b7ff9-5bgzx | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9724d2036305cbd729e1f484c5bad89971de977fff8a6723fef1873858dd1123" |
| | assisted-installer | kubelet | assisted-installer-controller-lj5rn | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb3ec61f9a932a9ad13bdeb44bcf9477a8d5f728151d7f19ed3ef7d4b02b3a82" |
| | openshift-network-operator | kubelet | network-operator-79767b7ff9-5bgzx | Started | Started container network-operator |
| | openshift-network-operator | kubelet | network-operator-79767b7ff9-5bgzx | Created | Created container: network-operator |
| | openshift-network-operator | kubelet | network-operator-79767b7ff9-5bgzx | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9724d2036305cbd729e1f484c5bad89971de977fff8a6723fef1873858dd1123" in 3.587s (3.587s including waiting). Image size: 616108962 bytes. |
| | openshift-network-operator | cluster-network-operator | network-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-network-operator | network-operator | network-operator-lock | LeaderElection | master-0_e639e7e1-dbaa-49f1-9b57-8103de8d49ce became leader |
| | assisted-installer | kubelet | assisted-installer-controller-lj5rn | Created | Created container: assisted-installer-controller |
| | assisted-installer | kubelet | assisted-installer-controller-lj5rn | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb3ec61f9a932a9ad13bdeb44bcf9477a8d5f728151d7f19ed3ef7d4b02b3a82" in 5.505s (5.505s including waiting). Image size: 682371258 bytes. |
| | assisted-installer | kubelet | assisted-installer-controller-lj5rn | Started | Started container assisted-installer-controller |
| | openshift-network-operator | default-scheduler | mtu-prober-4tf99 | Scheduled | Successfully assigned openshift-network-operator/mtu-prober-4tf99 to master-0 |
| | openshift-network-operator | kubelet | mtu-prober-4tf99 | Started | Started container prober |
| | openshift-network-operator | kubelet | mtu-prober-4tf99 | Created | Created container: prober |
| | openshift-network-operator | kubelet | mtu-prober-4tf99 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9724d2036305cbd729e1f484c5bad89971de977fff8a6723fef1873858dd1123" already present on machine |
| | openshift-network-operator | job-controller | mtu-prober | SuccessfulCreate | Created pod: mtu-prober-4tf99 |
| | assisted-installer | job-controller | assisted-installer-controller | Completed | Job completed |
| | openshift-network-operator | job-controller | mtu-prober | Completed | Job completed |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-multus namespace |
| | openshift-multus | daemonset-controller | multus | SuccessfulCreate | Created pod: multus-twxbl |
| | openshift-multus | default-scheduler | multus-twxbl | Scheduled | Successfully assigned openshift-multus/multus-twxbl to master-0 |
| | openshift-multus | kubelet | multus-twxbl | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9014f384de5f9a0b7418d5869ad349abb9588d16bd09ed650a163c045315dbff" |
| | openshift-multus | daemonset-controller | multus-additional-cni-plugins | SuccessfulCreate | Created pod: multus-additional-cni-plugins-6pk59 |
| | openshift-multus | default-scheduler | multus-additional-cni-plugins-6pk59 | Scheduled | Successfully assigned openshift-multus/multus-additional-cni-plugins-6pk59 to master-0 |
| | openshift-multus | default-scheduler | network-metrics-daemon-gp57t | Scheduled | Successfully assigned openshift-multus/network-metrics-daemon-gp57t to master-0 |
| | openshift-multus | daemonset-controller | network-metrics-daemon | SuccessfulCreate | Created pod: network-metrics-daemon-gp57t |
| | openshift-multus | deployment-controller | multus-admission-controller | ScalingReplicaSet | Scaled up replica set multus-admission-controller-7dfc5b745f to 1 |
| | openshift-multus | kubelet | multus-additional-cni-plugins-6pk59 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cfde59e48cd5dee3721f34d249cb119cc3259fd857965d34f9c7ed83b0c363a1" |
| | openshift-multus | default-scheduler | multus-admission-controller-7dfc5b745f-258xq | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-multus | replicaset-controller | multus-admission-controller-7dfc5b745f | SuccessfulCreate | Created pod: multus-admission-controller-7dfc5b745f-258xq |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-ovn-kubernetes namespace |
| | openshift-multus | kubelet | multus-additional-cni-plugins-6pk59 | Created | Created container: egress-router-binary-copy |
| | openshift-multus | kubelet | multus-additional-cni-plugins-6pk59 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cfde59e48cd5dee3721f34d249cb119cc3259fd857965d34f9c7ed83b0c363a1" in 3.121s (3.121s including waiting). Image size: 532402162 bytes. |
| | openshift-multus | kubelet | multus-additional-cni-plugins-6pk59 | Started | Started container egress-router-binary-copy |
| | openshift-multus | kubelet | multus-additional-cni-plugins-6pk59 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:916566bb9d0143352324233d460ad94697719c11c8c9158e3aea8f475941751f" |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-host-network namespace |
| | openshift-ovn-kubernetes | daemonset-controller | ovnkube-node | SuccessfulCreate | Created pod: ovnkube-node-9xxhr |
| | openshift-ovn-kubernetes | default-scheduler | ovnkube-control-plane-5df5548d54-jjfhq | Scheduled | Successfully assigned openshift-ovn-kubernetes/ovnkube-control-plane-5df5548d54-jjfhq to master-0 |
| | openshift-ovn-kubernetes | default-scheduler | ovnkube-node-9xxhr | Scheduled | Successfully assigned openshift-ovn-kubernetes/ovnkube-node-9xxhr to master-0 |
| | openshift-ovn-kubernetes | deployment-controller | ovnkube-control-plane | ScalingReplicaSet | Scaled up replica set ovnkube-control-plane-5df5548d54 to 1 |
| | openshift-ovn-kubernetes | replicaset-controller | ovnkube-control-plane-5df5548d54 | SuccessfulCreate | Created pod: ovnkube-control-plane-5df5548d54-jjfhq |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-network-diagnostics namespace |
| | openshift-multus | kubelet | multus-twxbl | Created | Created container: kube-multus |
| | openshift-network-diagnostics | default-scheduler | network-check-source-85d8db45d4-5bjlq | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-5df5548d54-jjfhq | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3e65409fc2b27ad0aaeb500a39e264663d2980821f099b830b551785ce4ce8b" |
| | openshift-multus | kubelet | multus-twxbl | Started | Started container kube-multus |
| | openshift-multus | kubelet | multus-additional-cni-plugins-6pk59 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:916566bb9d0143352324233d460ad94697719c11c8c9158e3aea8f475941751f" in 7.936s (7.936s including waiting). Image size: 677523572 bytes. |
| | openshift-network-diagnostics | replicaset-controller | network-check-source-85d8db45d4 | SuccessfulCreate | Created pod: network-check-source-85d8db45d4-5bjlq |
| | openshift-multus | kubelet | multus-additional-cni-plugins-6pk59 | Created | Created container: cni-plugins |
| | openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-5df5548d54-jjfhq | Started | Started container kube-rbac-proxy |
| | openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-5df5548d54-jjfhq | Created | Created container: kube-rbac-proxy |
| | openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-5df5548d54-jjfhq | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69ffd8f8dcceedc2d6eb306cea33f8beabc1be1308cd5f4ee8b9a8e3eab9843" already present on machine |
| | openshift-network-diagnostics | deployment-controller | network-check-source | ScalingReplicaSet | Scaled up replica set network-check-source-85d8db45d4 to 1 |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-9xxhr | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3e65409fc2b27ad0aaeb500a39e264663d2980821f099b830b551785ce4ce8b" |
| | openshift-multus | kubelet | multus-twxbl | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9014f384de5f9a0b7418d5869ad349abb9588d16bd09ed650a163c045315dbff" in 12.323s (12.323s including waiting). Image size: 1232140918 bytes. |
| | openshift-multus | kubelet | multus-additional-cni-plugins-6pk59 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a3d37aa7a22c68afa963ecfb4b43c52cccf152580cd66e4d5382fb69e4037cc" |
| | openshift-network-diagnostics | daemonset-controller | network-check-target | SuccessfulCreate | Created pod: network-check-target-w2lss |
| | openshift-network-diagnostics | default-scheduler | network-check-target-w2lss | Scheduled | Successfully assigned openshift-network-diagnostics/network-check-target-w2lss to master-0 |
| | openshift-multus | kubelet | multus-additional-cni-plugins-6pk59 | Started | Started container cni-plugins |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-network-node-identity namespace |
| | openshift-multus | kubelet | multus-additional-cni-plugins-6pk59 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9432c13d76bd4ba4eb9197c050cf88c0d701fa2055eeb59257e2e23901f9fdff" |
| | openshift-multus | kubelet | multus-additional-cni-plugins-6pk59 | Created | Created container: bond-cni-plugin |
| | openshift-multus | kubelet | multus-additional-cni-plugins-6pk59 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a3d37aa7a22c68afa963ecfb4b43c52cccf152580cd66e4d5382fb69e4037cc" in 1.621s (1.621s including waiting). Image size: 406053031 bytes. |
| | openshift-multus | kubelet | multus-additional-cni-plugins-6pk59 | Started | Started container bond-cni-plugin |
| | openshift-network-node-identity | daemonset-controller | network-node-identity | SuccessfulCreate | Created pod: network-node-identity-f8hvq |
| | openshift-network-node-identity | kubelet | network-node-identity-f8hvq | FailedMount | MountVolume.SetUp failed for volume "webhook-cert" : secret "network-node-identity-cert" not found |
| | openshift-network-node-identity | default-scheduler | network-node-identity-f8hvq | Scheduled | Successfully assigned openshift-network-node-identity/network-node-identity-f8hvq to master-0 |
| | openshift-network-node-identity | kubelet | network-node-identity-f8hvq | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3e65409fc2b27ad0aaeb500a39e264663d2980821f099b830b551785ce4ce8b" |
| | openshift-multus | kubelet | multus-additional-cni-plugins-6pk59 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9432c13d76bd4ba4eb9197c050cf88c0d701fa2055eeb59257e2e23901f9fdff" in 1.126s (1.126s including waiting). Image size: 401810450 bytes. |
| | openshift-multus | kubelet | multus-additional-cni-plugins-6pk59 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:631a3798b749fecc041a99929eb946618df723e15055e805ff752a1a1273481c" |
| | openshift-multus | kubelet | multus-additional-cni-plugins-6pk59 | Started | Started container routeoverride-cni |
| | openshift-multus | kubelet | multus-additional-cni-plugins-6pk59 | Created | Created container: routeoverride-cni |
| (x7) | openshift-multus | kubelet | network-metrics-daemon-gp57t | FailedMount | MountVolume.SetUp failed for volume "metrics-certs" : object "openshift-multus"/"metrics-daemon-secret" not registered |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-9xxhr | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3e65409fc2b27ad0aaeb500a39e264663d2980821f099b830b551785ce4ce8b" in 22.181s (22.181s including waiting). Image size: 1631758507 bytes. |
| (x18) | openshift-multus | kubelet | network-metrics-daemon-gp57t | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-5df5548d54-jjfhq | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3e65409fc2b27ad0aaeb500a39e264663d2980821f099b830b551785ce4ce8b" in 22.02s (22.02s including waiting). Image size: 1631758507 bytes. |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-9xxhr | Created | Created container: kubecfg-setup |
| | openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-5df5548d54-jjfhq | Created | Created container: ovnkube-cluster-manager |
| | openshift-multus | kubelet | multus-additional-cni-plugins-6pk59 | Created | Created container: whereabouts-cni-bincopy |
| | openshift-multus | kubelet | multus-additional-cni-plugins-6pk59 | Started | Started container whereabouts-cni-bincopy |
| | openshift-multus | kubelet | multus-additional-cni-plugins-6pk59 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:631a3798b749fecc041a99929eb946618df723e15055e805ff752a1a1273481c" in 17.313s (17.313s including waiting). Image size: 870567329 bytes. |
| | openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-5df5548d54-jjfhq | Started | Started container ovnkube-cluster-manager |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-9xxhr | Started | Started container kubecfg-setup |
| | openshift-ovn-kubernetes | ovnk-controlplane | ovn-kubernetes-master | LeaderElection | ovnkube-control-plane-5df5548d54-jjfhq became leader |
| | openshift-network-node-identity | master-0_a89af2fb-93a6-4f42-8776-fade096642f3 | ovnkube-identity | LeaderElection | master-0_a89af2fb-93a6-4f42-8776-fade096642f3 became leader |
| | openshift-network-node-identity | kubelet | network-node-identity-f8hvq | Started | Started container approver |
| | openshift-network-node-identity | kubelet | network-node-identity-f8hvq | Created | Created container: approver |
| | openshift-network-node-identity | kubelet | network-node-identity-f8hvq | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3e65409fc2b27ad0aaeb500a39e264663d2980821f099b830b551785ce4ce8b" already present on machine |
| | openshift-network-node-identity | kubelet | network-node-identity-f8hvq | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3e65409fc2b27ad0aaeb500a39e264663d2980821f099b830b551785ce4ce8b" in 17.804s (17.804s including waiting). Image size: 1631758507 bytes. |
| | openshift-network-node-identity | kubelet | network-node-identity-f8hvq | Created | Created container: webhook |
| | openshift-network-node-identity | kubelet | network-node-identity-f8hvq | Started | Started container webhook |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-9xxhr | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3e65409fc2b27ad0aaeb500a39e264663d2980821f099b830b551785ce4ce8b" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-9xxhr | Created | Created container: ovn-acl-logging |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-9xxhr | Started | Started container ovn-acl-logging |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-9xxhr | Started | Started container ovn-controller |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-9xxhr | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69ffd8f8dcceedc2d6eb306cea33f8beabc1be1308cd5f4ee8b9a8e3eab9843" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-9xxhr | Created | Created container: ovn-controller |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-9xxhr | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3e65409fc2b27ad0aaeb500a39e264663d2980821f099b830b551785ce4ce8b" already present on machine |
| | openshift-multus | kubelet | multus-additional-cni-plugins-6pk59 | Started | Started container whereabouts-cni |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-9xxhr | Created | Created container: nbdb |
| | openshift-multus | kubelet | multus-additional-cni-plugins-6pk59 | Created | Created container: whereabouts-cni |
| | openshift-multus | kubelet | multus-additional-cni-plugins-6pk59 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:631a3798b749fecc041a99929eb946618df723e15055e805ff752a1a1273481c" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-9xxhr | Created | Created container: kube-rbac-proxy-node |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-9xxhr | Started | Started container kube-rbac-proxy-node |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-9xxhr | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69ffd8f8dcceedc2d6eb306cea33f8beabc1be1308cd5f4ee8b9a8e3eab9843" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-9xxhr | Created | Created container: kube-rbac-proxy-ovn-metrics |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-9xxhr | Started | Started container nbdb |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-9xxhr | Started | Started container kube-rbac-proxy-ovn-metrics |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-9xxhr | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3e65409fc2b27ad0aaeb500a39e264663d2980821f099b830b551785ce4ce8b" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-9xxhr | Created | Created container: northd |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-9xxhr | Started | Started container northd |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-9xxhr | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3e65409fc2b27ad0aaeb500a39e264663d2980821f099b830b551785ce4ce8b" already present on machine |
| | openshift-multus | kubelet | multus-additional-cni-plugins-6pk59 | Created | Created container: kube-multus-additional-cni-plugins |
| | openshift-multus | kubelet | multus-additional-cni-plugins-6pk59 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9014f384de5f9a0b7418d5869ad349abb9588d16bd09ed650a163c045315dbff" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-9xxhr | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3e65409fc2b27ad0aaeb500a39e264663d2980821f099b830b551785ce4ce8b" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-9xxhr | Created | Created container: sbdb |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-9xxhr | Started | Started container sbdb |
| | default | ovnkube-csr-approver-controller | csr-cz8bh | CSRApproved | CSR "csr-cz8bh" has been approved |
| | default | ovnk-controlplane | master-0 | ErrorAddingResource | [k8s.ovn.org/node-chassis-id annotation not found for node master-0, error getting gateway config for node master-0: k8s.ovn.org/l3-gateway-config annotation not found for node "master-0", failed to update chassis to local for local node master-0, error: failed to parse node chassis-id for node - master-0, error: k8s.ovn.org/node-chassis-id annotation not found for node master-0] |
| (x7) | openshift-network-diagnostics | kubelet | network-check-target-w2lss | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-tbt5w" : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] |
| | openshift-ovn-kubernetes | daemonset-controller | ovnkube-node | SuccessfulDelete | Deleted pod: ovnkube-node-9xxhr |
| (x8) | openshift-cluster-version | kubelet | cluster-version-operator-77dfcc565f-nqpsd | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : secret "cluster-version-operator-serving-cert" not found |
| (x18) | openshift-network-diagnostics | kubelet | network-check-target-w2lss | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | default | ovnkube-csr-approver-controller | csr-qfbn7 | CSRApproved | CSR "csr-qfbn7" has been approved |
| | openshift-ovn-kubernetes | daemonset-controller | ovnkube-node | SuccessfulCreate | Created pod: ovnkube-node-z4qvm |
| | openshift-ovn-kubernetes | default-scheduler | ovnkube-node-z4qvm | Scheduled | Successfully assigned openshift-ovn-kubernetes/ovnkube-node-z4qvm to master-0 |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-z4qvm | Started | Started container kubecfg-setup |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-z4qvm | Created | Created container: kubecfg-setup |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-z4qvm | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3e65409fc2b27ad0aaeb500a39e264663d2980821f099b830b551785ce4ce8b" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-z4qvm | Created | Created container: kube-rbac-proxy-ovn-metrics |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-z4qvm | Started | Started container ovn-acl-logging |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-z4qvm | Started | Started container kube-rbac-proxy-node |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-z4qvm | Started | Started container kube-rbac-proxy-ovn-metrics |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-z4qvm | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3e65409fc2b27ad0aaeb500a39e264663d2980821f099b830b551785ce4ce8b" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-z4qvm | Created | Created container: northd |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-z4qvm | Started | Started container northd |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-z4qvm | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3e65409fc2b27ad0aaeb500a39e264663d2980821f099b830b551785ce4ce8b" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-z4qvm | Created | Created container: nbdb |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-z4qvm | Started | Started container nbdb |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-z4qvm | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3e65409fc2b27ad0aaeb500a39e264663d2980821f099b830b551785ce4ce8b" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-z4qvm | Created | Created container: ovn-controller |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-z4qvm | Started | Started container ovn-controller |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-z4qvm |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3e65409fc2b27ad0aaeb500a39e264663d2980821f099b830b551785ce4ce8b" already present on machine | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-z4qvm |
Created |
Created container: kube-rbac-proxy-node | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-z4qvm |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69ffd8f8dcceedc2d6eb306cea33f8beabc1be1308cd5f4ee8b9a8e3eab9843" already present on machine | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-z4qvm |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69ffd8f8dcceedc2d6eb306cea33f8beabc1be1308cd5f4ee8b9a8e3eab9843" already present on machine | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-z4qvm |
Created |
Created container: ovn-acl-logging | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-z4qvm |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3e65409fc2b27ad0aaeb500a39e264663d2980821f099b830b551785ce4ce8b" already present on machine | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-z4qvm |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3e65409fc2b27ad0aaeb500a39e264663d2980821f099b830b551785ce4ce8b" already present on machine | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-z4qvm |
Started |
Started container sbdb | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-z4qvm |
Created |
Created container: sbdb | |
openshift-image-registry |
default-scheduler |
cluster-image-registry-operator-6fb9f88b7-tgvfl |
Scheduled |
Successfully assigned openshift-image-registry/cluster-image-registry-operator-6fb9f88b7-tgvfl to master-0 | |
openshift-config-operator |
default-scheduler |
openshift-config-operator-68758cbcdb-zh8g5 |
Scheduled |
Successfully assigned openshift-config-operator/openshift-config-operator-68758cbcdb-zh8g5 to master-0 | |
openshift-kube-storage-version-migrator-operator |
default-scheduler |
kube-storage-version-migrator-operator-b9c5dfc78-dcxkw |
Scheduled |
Successfully assigned openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b9c5dfc78-dcxkw to master-0 | |
openshift-kube-apiserver-operator |
default-scheduler |
kube-apiserver-operator-765d9ff747-gr68k |
Scheduled |
Successfully assigned openshift-kube-apiserver-operator/kube-apiserver-operator-765d9ff747-gr68k to master-0 | |
openshift-dns-operator |
default-scheduler |
dns-operator-7c56cf9b74-xz27r |
Scheduled |
Successfully assigned openshift-dns-operator/dns-operator-7c56cf9b74-xz27r to master-0 | |
openshift-authentication-operator |
default-scheduler |
authentication-operator-6c968fdfdf-nrrfw |
Scheduled |
Successfully assigned openshift-authentication-operator/authentication-operator-6c968fdfdf-nrrfw to master-0 | |
openshift-marketplace |
default-scheduler |
marketplace-operator-f797b99b6-hjjrk |
Scheduled |
Successfully assigned openshift-marketplace/marketplace-operator-f797b99b6-hjjrk to master-0 | |
openshift-cluster-olm-operator |
default-scheduler |
cluster-olm-operator-56fcb6cc5f-4xwp2 |
Scheduled |
Successfully assigned openshift-cluster-olm-operator/cluster-olm-operator-56fcb6cc5f-4xwp2 to master-0 | |
openshift-operator-lifecycle-manager |
default-scheduler |
olm-operator-7cd7dbb44c-vzj4q |
Scheduled |
Successfully assigned openshift-operator-lifecycle-manager/olm-operator-7cd7dbb44c-vzj4q to master-0 | |
openshift-operator-lifecycle-manager |
default-scheduler |
package-server-manager-67477646d4-7hndf |
Scheduled |
Successfully assigned openshift-operator-lifecycle-manager/package-server-manager-67477646d4-7hndf to master-0 | |
openshift-kube-controller-manager-operator |
default-scheduler |
kube-controller-manager-operator-848f645654-7hmhg |
Scheduled |
Successfully assigned openshift-kube-controller-manager-operator/kube-controller-manager-operator-848f645654-7hmhg to master-0 | |
openshift-operator-lifecycle-manager |
default-scheduler |
catalog-operator-fbc6455c4-5m8ll |
Scheduled |
Successfully assigned openshift-operator-lifecycle-manager/catalog-operator-fbc6455c4-5m8ll to master-0 | |
openshift-cluster-node-tuning-operator |
default-scheduler |
cluster-node-tuning-operator-85cff47f46-4gv5j |
Scheduled |
Successfully assigned openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-85cff47f46-4gv5j to master-0 | |
openshift-cluster-storage-operator |
default-scheduler |
csi-snapshot-controller-operator-6bc8656fdc-2q7f5 |
Scheduled |
Successfully assigned openshift-cluster-storage-operator/csi-snapshot-controller-operator-6bc8656fdc-2q7f5 to master-0 | |
openshift-ingress-operator |
default-scheduler |
ingress-operator-8649c48786-cx2b2 |
Scheduled |
Successfully assigned openshift-ingress-operator/ingress-operator-8649c48786-cx2b2 to master-0 | |
openshift-apiserver-operator |
default-scheduler |
openshift-apiserver-operator-7bf7f6b755-sh6qf |
Scheduled |
Successfully assigned openshift-apiserver-operator/openshift-apiserver-operator-7bf7f6b755-sh6qf to master-0 | |
openshift-service-ca-operator |
default-scheduler |
service-ca-operator-77758bc754-8smqn |
Scheduled |
Successfully assigned openshift-service-ca-operator/service-ca-operator-77758bc754-8smqn to master-0 | |
openshift-monitoring |
default-scheduler |
cluster-monitoring-operator-7ff994598c-p82nn |
Scheduled |
Successfully assigned openshift-monitoring/cluster-monitoring-operator-7ff994598c-p82nn to master-0 | |
openshift-multus |
default-scheduler |
multus-admission-controller-7dfc5b745f-258xq |
Scheduled |
Successfully assigned openshift-multus/multus-admission-controller-7dfc5b745f-258xq to master-0 | |
openshift-kube-scheduler-operator |
default-scheduler |
openshift-kube-scheduler-operator-5f85974995-g4rwv |
Scheduled |
Successfully assigned openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f85974995-g4rwv to master-0 | |
openshift-etcd-operator |
default-scheduler |
etcd-operator-5bf4d88c6f-2bpmr |
Scheduled |
Successfully assigned openshift-etcd-operator/etcd-operator-5bf4d88c6f-2bpmr to master-0 | |
openshift-controller-manager-operator |
default-scheduler |
openshift-controller-manager-operator-6c8676f99d-7z948 |
Scheduled |
Successfully assigned openshift-controller-manager-operator/openshift-controller-manager-operator-6c8676f99d-7z948 to master-0 | |
openshift-network-operator |
daemonset-controller |
iptables-alerter |
SuccessfulCreate |
Created pod: iptables-alerter-bkmlp | |
openshift-network-operator |
default-scheduler |
iptables-alerter-bkmlp |
Scheduled |
Successfully assigned openshift-network-operator/iptables-alerter-bkmlp to master-0 | |
openshift-network-operator |
kubelet |
iptables-alerter-bkmlp |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:79f99fd6cce984287932edf0d009660bb488d663081f3d62ec3b23bc8bfbf6c2" | |
openshift-kube-storage-version-migrator-operator |
kubelet |
kube-storage-version-migrator-operator-b9c5dfc78-dcxkw |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:75d996f6147edb88c09fd1a052099de66638590d7d03a735006244bc9e19f898" | |
openshift-kube-controller-manager-operator |
kubelet |
kube-controller-manager-operator-848f645654-7hmhg |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a824e468cf8dd61d347e35b2ee5bc2f815666957647098e21a1bb56ff613e5b9" | |
openshift-cluster-storage-operator |
multus |
csi-snapshot-controller-operator-6bc8656fdc-2q7f5 |
AddedInterface |
Add eth0 [10.128.0.6/23] from ovn-kubernetes | |
openshift-cluster-olm-operator |
kubelet |
cluster-olm-operator-56fcb6cc5f-4xwp2 |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3f0aa9cd04713acc5c6fea721bd849e1500da8ae945e0b32000887f34d786e0b" | |
openshift-cluster-olm-operator |
multus |
cluster-olm-operator-56fcb6cc5f-4xwp2 |
AddedInterface |
Add eth0 [10.128.0.18/23] from ovn-kubernetes | |
openshift-service-ca-operator |
kubelet |
service-ca-operator-77758bc754-8smqn |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8139ed65c0a0a4b0f253b715c11cc52be027efe8a4774da9ccce35c78ef439da" | |
openshift-kube-controller-manager-operator |
multus |
kube-controller-manager-operator-848f645654-7hmhg |
AddedInterface |
Add eth0 [10.128.0.16/23] from ovn-kubernetes | |
openshift-config-operator |
kubelet |
openshift-config-operator-68758cbcdb-zh8g5 |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b00c658332d6c6786bd969b26097c20a78c79c045f1692a8809234f5fb586c22" | |
openshift-apiserver-operator |
kubelet |
openshift-apiserver-operator-7bf7f6b755-sh6qf |
Failed |
Error: ErrImagePull | |
openshift-apiserver-operator |
kubelet |
openshift-apiserver-operator-7bf7f6b755-sh6qf |
Failed |
Failed to pull image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8375671da86aa527ee7e291d86971b0baa823ffc7663b5a983084456e76c0f59": pull QPS exceeded | |
openshift-etcd-operator |
kubelet |
etcd-operator-5bf4d88c6f-2bpmr |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f4d4282cb53325e737ad68abbfcb70687ae04fb50353f4f0ba0ba5703b15009a" | |
openshift-etcd-operator |
multus |
etcd-operator-5bf4d88c6f-2bpmr |
AddedInterface |
Add eth0 [10.128.0.21/23] from ovn-kubernetes | |
openshift-config-operator |
multus |
openshift-config-operator-68758cbcdb-zh8g5 |
AddedInterface |
Add eth0 [10.128.0.10/23] from ovn-kubernetes | |
openshift-kube-apiserver-operator |
multus |
kube-apiserver-operator-765d9ff747-gr68k |
AddedInterface |
Add eth0 [10.128.0.25/23] from ovn-kubernetes | |
openshift-apiserver-operator |
kubelet |
openshift-apiserver-operator-7bf7f6b755-sh6qf |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8375671da86aa527ee7e291d86971b0baa823ffc7663b5a983084456e76c0f59" | |
openshift-service-ca-operator |
multus |
service-ca-operator-77758bc754-8smqn |
AddedInterface |
Add eth0 [10.128.0.19/23] from ovn-kubernetes | |
openshift-kube-apiserver-operator |
kubelet |
kube-apiserver-operator-765d9ff747-gr68k |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f43c31aa3359159d4557dad3cfaf812d8ce44db9cb9ae970e06d3479070b660" already present on machine | |
openshift-kube-scheduler-operator |
multus |
openshift-kube-scheduler-operator-5f85974995-g4rwv |
AddedInterface |
Add eth0 [10.128.0.23/23] from ovn-kubernetes | |
openshift-kube-scheduler-operator |
kubelet |
openshift-kube-scheduler-operator-5f85974995-g4rwv |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f042fa25014f3d37f3ea967d21f361d2a11833ae18f2c750318101b25d2497ce" | |
openshift-controller-manager-operator |
multus |
openshift-controller-manager-operator-6c8676f99d-7z948 |
AddedInterface |
Add eth0 [10.128.0.8/23] from ovn-kubernetes | |
openshift-cluster-storage-operator |
kubelet |
csi-snapshot-controller-operator-6bc8656fdc-2q7f5 |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10e57ca7611f79710f05777dc6a8f31c7e04eb09da4d8d793a5acfbf0e4692d7" | |
openshift-apiserver-operator |
multus |
openshift-apiserver-operator-7bf7f6b755-sh6qf |
AddedInterface |
Add eth0 [10.128.0.5/23] from ovn-kubernetes | |
openshift-authentication-operator |
kubelet |
authentication-operator-6c968fdfdf-nrrfw |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e85850a4ae1a1e3ec2c590a4936d640882b6550124da22031c85b526afbf52df" | |
openshift-controller-manager-operator |
kubelet |
openshift-controller-manager-operator-6c8676f99d-7z948 |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8eabac819f289e29d75c7ab172d8124554849a47f0b00770928c3eb19a5a31c4" | |
openshift-authentication-operator |
multus |
authentication-operator-6c968fdfdf-nrrfw |
AddedInterface |
Add eth0 [10.128.0.11/23] from ovn-kubernetes | |
openshift-kube-storage-version-migrator-operator |
multus |
kube-storage-version-migrator-operator-b9c5dfc78-dcxkw |
AddedInterface |
Add eth0 [10.128.0.26/23] from ovn-kubernetes | |
openshift-kube-apiserver-operator |
kubelet |
kube-apiserver-operator-765d9ff747-gr68k |
Created |
Created container: kube-apiserver-operator | |
openshift-kube-apiserver-operator |
kubelet |
kube-apiserver-operator-765d9ff747-gr68k |
Started |
Started container kube-apiserver-operator | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorVersionChanged |
clusteroperator/kube-apiserver version "raw-internal" changed from "" to "4.18.29" | |
| (x2) | openshift-apiserver-operator |
kubelet |
openshift-apiserver-operator-7bf7f6b755-sh6qf |
Failed |
Error: ImagePullBackOff |
openshift-kube-apiserver-operator |
kube-apiserver-operator-high-cpu-usage-alert-controller-highcpuusagealertcontroller |
kube-apiserver-operator |
PrometheusRuleCreated |
Created PrometheusRule.monitoring.coreos.com/cpu-utilization -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator |
kube-apiserver-operator |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", 
"StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
SignerUpdateRequired |
"node-system-admin-signer" in "openshift-kube-apiserver-operator" requires a new signing cert/key pair: secret doesn't exist | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-serviceaccountissuercontroller |
kube-apiserver-operator |
ServiceAccountIssuer |
Issuer set to default value "https://kubernetes.default.svc" | |
| (x2) | openshift-apiserver-operator |
kubelet |
openshift-apiserver-operator-7bf7f6b755-sh6qf |
BackOff |
Back-off pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8375671da86aa527ee7e291d86971b0baa823ffc7663b5a983084456e76c0f59" |
openshift-kube-apiserver-operator |
kube-apiserver-operator |
kube-apiserver-operator-lock |
LeaderElection |
kube-apiserver-operator-765d9ff747-gr68k_132d7dc9-c35a-4460-890e-6e7241927e67 became leader | |
| (x2) | openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
CABundleUpdateRequired |
"loadbalancer-serving-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [{"operator.openshift.io" "kubeapiservers" "" "cluster"} {"apiextensions.k8s.io" "customresourcedefinitions" "" ""} {"security.openshift.io" "securitycontextconstraints" "" ""} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-apiserver-operator"} {"" "namespaces" "" "openshift-kube-apiserver"} {"admissionregistration.k8s.io" "mutatingwebhookconfigurations" "" ""} {"admissionregistration.k8s.io" "validatingwebhookconfigurations" "" ""} {"controlplane.operator.openshift.io" "podnetworkconnectivitychecks" "openshift-kube-apiserver" ""} {"apiserver.openshift.io" "apirequestcounts" "" ""}] to [{"operator.openshift.io" "kubeapiservers" "" "cluster"} {"apiextensions.k8s.io" "customresourcedefinitions" "" ""} {"security.openshift.io" "securitycontextconstraints" "" ""} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-apiserver-operator"} {"" "namespaces" "" "openshift-kube-apiserver"} {"admissionregistration.k8s.io" "mutatingwebhookconfigurations" "" ""} {"admissionregistration.k8s.io" "validatingwebhookconfigurations" "" ""} {"controlplane.operator.openshift.io" "podnetworkconnectivitychecks" "openshift-kube-apiserver" ""} {"apiserver.openshift.io" "apirequestcounts" "" ""} {"config.openshift.io" "nodes" "" "cluster"}],status.versions changed from [] to [{"raw-internal" "4.18.29"}] | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
SignerUpdateRequired |
"localhost-recovery-serving-signer" in "openshift-kube-apiserver-operator" requires a new signing cert/key pair: secret doesn't exist | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
CABundleUpdateRequired |
"service-network-serving-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
CABundleUpdateRequired |
"localhost-serving-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Upgradeable changed from Unknown to True ("All is well"),EvaluationConditionsDetected changed from Unknown to False ("All is well") | |
| (x5) | openshift-multus |
kubelet |
multus-admission-controller-7dfc5b745f-258xq |
FailedMount |
MountVolume.SetUp failed for volume "webhook-certs" : secret "multus-admission-controller-secret" not found |
openshift-kube-apiserver-operator |
kube-apiserver-operator-config-observer-configobserver |
kube-apiserver-operator |
ObserveFeatureFlagsUpdated |
Updated apiServerArguments.feature-gates to AWSEFSDriverVolumeMetrics=true,AdminNetworkPolicy=true,AlibabaPlatform=true,AzureWorkloadIdentity=true,BareMetalLoadBalancer=true,BuildCSIVolumes=true,ChunkSizeMiB=true,CloudDualStackNodeIPs=true,DisableKubeletCloudCredentialProviders=true,GCPLabelsTags=true,HardwareSpeed=true,IngressControllerLBSubnetsAWS=true,KMSv1=true,ManagedBootImages=true,ManagedBootImagesAWS=true,MultiArchInstallAWS=true,MultiArchInstallGCP=true,NetworkDiagnosticsConfig=true,NetworkLiveMigration=true,NetworkSegmentation=true,NewOLM=true,NodeDisruptionPolicy=true,OnClusterBuild=true,PersistentIPsForVirtualization=true,PrivateHostedZoneAWS=true,SetEIPForNLBIngressController=true,VSphereControlPlaneMachineSet=true,VSphereDriverConfiguration=true,VSphereMultiVCenters=true,VSphereStaticIPs=true,ValidatingAdmissionPolicy=true,AWSClusterHostedDNS=false,AdditionalRoutingCapabilities=false,AutomatedEtcdBackup=false,BootcNodeManagement=false,CSIDriverSharedResource=false,ClusterAPIInstall=false,ClusterAPIInstallIBMCloud=false,ClusterMonitoringConfig=false,ConsolePluginContentSecurityPolicy=false,DNSNameResolver=false,DynamicResourceAllocation=false,EtcdBackendQuota=false,EventedPLEG=false,Example=false,ExternalOIDC=false,ExternalOIDCWithUIDAndExtraClaimMappings=false,GCPClusterHostedDNS=false,GatewayAPI=false,ImageStreamImportMode=false,IngressControllerDynamicConfigurationManager=false,InsightsConfig=false,InsightsConfigAPI=false,InsightsOnDemandDataGather=false,InsightsRuntimeExtractor=false,MachineAPIMigration=false,MachineAPIOperatorDisableMachineHealthCheckController=false,MachineAPIProviderOpenStack=false,MachineConfigNodes=false,MaxUnavailableStatefulSet=false,MetricsCollectionProfiles=false,MinimumKubeletVersion=false,MixedCPUsAllocation=false,MultiArchInstallAzure=false,NodeSwap=false,NutanixMultiSubnets=false,OVNObservability=false,OpenShiftPodSecurityAdmission=false,PinnedImages=false,PlatformOperators=false,ProcMountType=false,RouteAdvertisements=
false,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,SignatureStores=false,SigstoreImageVerification=false,StreamingCollectionEncodingToJSON=false,StreamingCollectionEncodingToProtobuf=false,TranslateStreamCloseWebsocketRequests=false,UpgradeStatus=false,UserNamespacesPodSecurityStandards=false,UserNamespacesSupport=false,VSphereMultiNetworks=false,VolumeAttributesClass=false,VolumeGroupSnapshot=false | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-config-observer-configobserver |
kube-apiserver-operator |
ObserveStorageUpdated |
Updated storage urls to https://192.168.32.10:2379,https://localhost:2379 | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found" | |
| (x5) | openshift-cluster-node-tuning-operator |
kubelet |
cluster-node-tuning-operator-85cff47f46-4gv5j |
FailedMount |
MountVolume.SetUp failed for volume "node-tuning-operator-tls" : secret "node-tuning-operator-tls" not found |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Progressing changed from Unknown to False ("All is well") | |
| (x5) | openshift-operator-lifecycle-manager |
kubelet |
package-server-manager-67477646d4-7hndf |
FailedMount |
MountVolume.SetUp failed for volume "package-server-manager-serving-cert" : secret "package-server-manager-serving-cert" not found |
| (x5) | openshift-ingress-operator |
kubelet |
ingress-operator-8649c48786-cx2b2 |
FailedMount |
MountVolume.SetUp failed for volume "metrics-tls" : secret "metrics-tls" not found |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found" to "NodeControllerDegraded: All master nodes are ready\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found" | |
| (x8) | openshift-kube-apiserver-operator |
kube-apiserver-operator-target-config-controller-targetconfigcontroller |
kube-apiserver-operator |
ConfigMissing |
no observedConfig |
openshift-kube-apiserver-operator |
kube-apiserver-operator-config-observer-configobserver |
kube-apiserver-operator |
ObservedConfigChanged |
Writing updated observed config: Â Â map[string]any{ +Â "admission": map[string]any{ +Â "pluginConfig": map[string]any{ +Â "PodSecurity": map[string]any{"configuration": map[string]any{...}}, +Â "network.openshift.io/ExternalIPRanger": map[string]any{"configuration": map[string]any{...}}, +Â "network.openshift.io/RestrictedEndpointsAdmission": map[string]any{"configuration": map[string]any{...}}, +Â }, +Â }, +Â "apiServerArguments": map[string]any{ +Â "api-audiences": []any{string("https://kubernetes.default.svc")}, +Â "etcd-servers": []any{string("https://192.168.32.10:2379"), string("https://localhost:2379")}, +Â "feature-gates": []any{ +Â string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), +Â string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), +Â string("BareMetalLoadBalancer=true"), string("BuildCSIVolumes=true"), +Â string("ChunkSizeMiB=true"), string("CloudDualStackNodeIPs=true"), ..., +Â }, +Â "goaway-chance": []any{string("0")}, +Â "runtime-config": []any{string("admissionregistration.k8s.io/v1beta1=true")}, +Â "send-retry-after-while-not-ready-once": []any{string("true")}, +Â "service-account-issuer": []any{string("https://kubernetes.default.svc")}, +Â "service-account-jwks-uri": []any{string("https://api.sno.openstack.lab:6443/openid/v1/jwks")}, +Â "shutdown-delay-duration": []any{string("0s")}, +Â }, +Â "corsAllowedOrigins": []any{string(`//127\.0\.0\.1(:|$)`), string("//localhost(:|$)")}, +Â "gracefulTerminationDuration": string("15"), +Â "servicesSubnet": string("172.30.0.0/16"), +Â "servingInfo": map[string]any{ +Â "bindAddress": string("0.0.0.0:6443"), +Â "bindNetwork": string("tcp4"), +Â "cipherSuites": []any{ +Â string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), +Â string("TLS_CHACHA20_POLY1305_SHA256"), +Â string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), +Â string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"), +Â string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"), +Â 
string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"), +Â string("TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"), ..., +Â }, +Â "minTLSVersion": string("VersionTLS12"), +Â "namedCertificates": []any{ +Â map[string]any{ +Â "certFile": string("/etc/kubernetes/static-pod-certs"...), +Â "keyFile": string("/etc/kubernetes/static-pod-certs"...), +Â }, +Â map[string]any{ +Â "certFile": string("/etc/kubernetes/static-pod-certs"...), +Â "keyFile": string("/etc/kubernetes/static-pod-certs"...), +Â }, +Â map[string]any{ +Â "certFile": string("/etc/kubernetes/static-pod-certs"...), +Â "keyFile": string("/etc/kubernetes/static-pod-certs"...), +Â }, +Â map[string]any{ +Â "certFile": string("/etc/kubernetes/static-pod-certs"...), +Â "keyFile": string("/etc/kubernetes/static-pod-certs"...), +Â }, +Â map[string]any{ +Â "certFile": string("/etc/kubernetes/static-pod-resou"...), +Â "keyFile": string("/etc/kubernetes/static-pod-resou"...), +Â }, +Â }, +Â }, Â Â } | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
SecretUpdated |
Updated Secret/kube-control-plane-signer -n openshift-kube-apiserver-operator because it changed | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kube-apiserver-node |
kube-apiserver-operator |
MasterNodeObserved |
Observed new master node master-0 | |
| (x5) | openshift-image-registry |
kubelet |
cluster-image-registry-operator-6fb9f88b7-tgvfl |
FailedMount |
MountVolume.SetUp failed for volume "image-registry-operator-tls" : secret "image-registry-operator-tls" not found |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Upgradeable message changed from "All is well" to "KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are synced." | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kube-apiserver-node |
kube-apiserver-operator |
MasterNodesReadyChanged |
All master nodes are ready | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded changed from Unknown to False ("NodeControllerDegraded: All master nodes are ready") | |
| (x5) | openshift-operator-lifecycle-manager | kubelet | olm-operator-7cd7dbb44c-vzj4q | FailedMount | MountVolume.SetUp failed for volume "srv-cert" : secret "olm-operator-serving-cert" not found |
| (x5) | openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-85cff47f46-4gv5j | FailedMount | MountVolume.SetUp failed for volume "apiservice-cert" : secret "performance-addon-operator-webhook-cert" not found |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObserveTLSSecurityProfile | minTLSVersion changed to VersionTLS12 |
| (x5) | openshift-marketplace | kubelet | marketplace-operator-f797b99b6-hjjrk | FailedMount | MountVolume.SetUp failed for volume "marketplace-operator-metrics" : secret "marketplace-operator-metrics" not found |
| (x5) | openshift-operator-lifecycle-manager | kubelet | catalog-operator-fbc6455c4-5m8ll | FailedMount | MountVolume.SetUp failed for volume "srv-cert" : secret "catalog-operator-serving-cert" not found |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObserveTLSSecurityProfile | cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"] |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-audit-policy-controller-auditpolicycontroller | kube-apiserver-operator | FastControllerResync | Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling |
| (x5) | openshift-monitoring | kubelet | cluster-monitoring-operator-7ff994598c-p82nn | FailedMount | MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" : secret "cluster-monitoring-operator-tls" not found |
| (x5) | openshift-dns-operator | kubelet | dns-operator-7c56cf9b74-xz27r | FailedMount | MountVolume.SetUp failed for volume "metrics-tls" : secret "metrics-tls" not found |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | CABundleUpdateRequired | "kube-apiserver-aggregator-client-ca" in "openshift-config-managed" requires a new cert: configmap doesn't exist |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | CABundleUpdateRequired | "node-system-admin-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretUpdated | Updated Secret/aggregator-client-signer -n openshift-kube-apiserver-operator because it changed |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretCreated | Created Secret/node-system-admin-signer -n openshift-kube-apiserver-operator because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/loadbalancer-serving-ca -n openshift-kube-apiserver-operator because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | CABundleUpdateRequired | "kube-apiserver-to-kubelet-client-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/service-network-serving-ca -n openshift-kube-apiserver-operator because it was missing |
| | openshift-config-operator | kubelet | openshift-config-operator-68758cbcdb-zh8g5 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b00c658332d6c6786bd969b26097c20a78c79c045f1692a8809234f5fb586c22" in 6.788s (6.788s including waiting). Image size: 433122306 bytes. |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | TargetUpdateRequired | "service-network-serving-certkey" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | TargetUpdateRequired | "internal-loadbalancer-serving-certkey" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretUpdated | Updated Secret/kube-apiserver-to-kubelet-signer -n openshift-kube-apiserver-operator because it changed |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found" to "NodeControllerDegraded: All master nodes are ready\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]",Progressing message changed from "All is well" to "NodeInstallerProgressing: 1 node is at revision 0",Available changed from Unknown to False ("StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0") |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/localhost-serving-ca -n openshift-kube-apiserver-operator because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | TargetUpdateRequired | "localhost-serving-cert-certkey" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | CABundleUpdateRequired | "localhost-recovery-serving-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | ConfigMapCreateFailed | Failed to create ConfigMap/: configmaps "loadbalancer-serving-ca" already exists |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-serving-signer -n openshift-kube-apiserver-operator because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-boundsatokensignercontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/bound-sa-token-signing-certs -n openshift-kube-apiserver because it was missing |
| | openshift-kube-storage-version-migrator-operator | kubelet | kube-storage-version-migrator-operator-b9c5dfc78-dcxkw | Started | Started container kube-storage-version-migrator-operator |
| | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-56fcb6cc5f-4xwp2 | Created | Created container: copy-catalogd-manifests |
| | openshift-kube-storage-version-migrator-operator | kubelet | kube-storage-version-migrator-operator-b9c5dfc78-dcxkw | Created | Created container: kube-storage-version-migrator-operator |
| | openshift-controller-manager-operator | kubelet | openshift-controller-manager-operator-6c8676f99d-7z948 | Started | Started container openshift-controller-manager-operator |
| | openshift-controller-manager-operator | kubelet | openshift-controller-manager-operator-6c8676f99d-7z948 | Created | Created container: openshift-controller-manager-operator |
| | openshift-controller-manager-operator | kubelet | openshift-controller-manager-operator-6c8676f99d-7z948 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8eabac819f289e29d75c7ab172d8124554849a47f0b00770928c3eb19a5a31c4" in 8.757s (8.757s including waiting). Image size: 502436444 bytes. |
| (x2) | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | CABundleUpdateRequired | "kube-control-plane-signer-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist |
| | openshift-kube-controller-manager-operator | kubelet | kube-controller-manager-operator-848f645654-7hmhg | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a824e468cf8dd61d347e35b2ee5bc2f815666957647098e21a1bb56ff613e5b9" in 8.753s (8.753s including waiting). Image size: 503340749 bytes. |
| | openshift-kube-controller-manager-operator | kubelet | kube-controller-manager-operator-848f645654-7hmhg | Created | Created container: kube-controller-manager-operator |
| | openshift-kube-controller-manager-operator | kubelet | kube-controller-manager-operator-848f645654-7hmhg | Started | Started container kube-controller-manager-operator |
| | openshift-authentication-operator | kubelet | authentication-operator-6c968fdfdf-nrrfw | Started | Started container authentication-operator |
| | openshift-authentication-operator | kubelet | authentication-operator-6c968fdfdf-nrrfw | Created | Created container: authentication-operator |
| | openshift-authentication-operator | kubelet | authentication-operator-6c968fdfdf-nrrfw | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e85850a4ae1a1e3ec2c590a4936d640882b6550124da22031c85b526afbf52df" in 8.745s (8.745s including waiting). Image size: 507687221 bytes. |
| | openshift-service-ca-operator | kubelet | service-ca-operator-77758bc754-8smqn | Started | Started container service-ca-operator |
| | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-56fcb6cc5f-4xwp2 | Started | Started container copy-catalogd-manifests |
| | openshift-kube-storage-version-migrator-operator | kubelet | kube-storage-version-migrator-operator-b9c5dfc78-dcxkw | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:75d996f6147edb88c09fd1a052099de66638590d7d03a735006244bc9e19f898" in 8.755s (8.755s including waiting). Image size: 499082775 bytes. |
| | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-56fcb6cc5f-4xwp2 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3f0aa9cd04713acc5c6fea721bd849e1500da8ae945e0b32000887f34d786e0b" in 8.758s (8.758s including waiting). Image size: 442509555 bytes. |
| | openshift-service-ca-operator | kubelet | service-ca-operator-77758bc754-8smqn | Created | Created container: service-ca-operator |
| | openshift-service-ca-operator | kubelet | service-ca-operator-77758bc754-8smqn | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8139ed65c0a0a4b0f253b715c11cc52be027efe8a4774da9ccce35c78ef439da" in 8.747s (8.747s including waiting). Image size: 503011144 bytes. |
| | openshift-etcd-operator | kubelet | etcd-operator-5bf4d88c6f-2bpmr | Started | Started container etcd-operator |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-control-plane-signer-ca -n openshift-kube-apiserver-operator because it was missing |
| | openshift-etcd-operator | kubelet | etcd-operator-5bf4d88c6f-2bpmr | Created | Created container: etcd-operator |
| | openshift-network-operator | kubelet | iptables-alerter-bkmlp | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:79f99fd6cce984287932edf0d009660bb488d663081f3d62ec3b23bc8bfbf6c2" in 9.313s (9.313s including waiting). Image size: 576619763 bytes. |
| | openshift-etcd-operator | kubelet | etcd-operator-5bf4d88c6f-2bpmr | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f4d4282cb53325e737ad68abbfcb70687ae04fb50353f4f0ba0ba5703b15009a" in 8.748s (8.748s including waiting). Image size: 512838054 bytes. |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | TargetUpdateRequired | "kube-scheduler-client-cert-key" in "openshift-config-managed" requires a new target cert/key pair: secret doesn't exist |
| | openshift-kube-scheduler-operator | kubelet | openshift-kube-scheduler-operator-5f85974995-g4rwv | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f042fa25014f3d37f3ea967d21f361d2a11833ae18f2c750318101b25d2497ce" in 8.751s (8.751s including waiting). Image size: 500848684 bytes. |
| | openshift-kube-scheduler-operator | kubelet | openshift-kube-scheduler-operator-5f85974995-g4rwv | Created | Created container: kube-scheduler-operator-container |
| | openshift-kube-scheduler-operator | kubelet | openshift-kube-scheduler-operator-5f85974995-g4rwv | Started | Started container kube-scheduler-operator-container |
| | openshift-config-operator | kubelet | openshift-config-operator-68758cbcdb-zh8g5 | Started | Started container openshift-api |
| | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-operator-6bc8656fdc-2q7f5 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10e57ca7611f79710f05777dc6a8f31c7e04eb09da4d8d793a5acfbf0e4692d7" in 8.76s (8.76s including waiting). Image size: 500943492 bytes. |
| | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-operator-6bc8656fdc-2q7f5 | Created | Created container: csi-snapshot-controller-operator |
| | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-operator-6bc8656fdc-2q7f5 | Started | Started container csi-snapshot-controller-operator |
| | openshift-config-operator | kubelet | openshift-config-operator-68758cbcdb-zh8g5 | Created | Created container: openshift-api |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | TargetUpdateRequired | "node-system-admin-client" in "openshift-kube-apiserver-operator" requires a new target cert/key pair: secret doesn't exist |
| | openshift-config-operator | kubelet | openshift-config-operator-68758cbcdb-zh8g5 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3b8d91a25eeb9f02041e947adb3487da3e7ab8449d3d2ad015827e7954df7b34" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/node-system-admin-ca -n openshift-kube-apiserver-operator because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-backingresourcecontroller-backingresourcecontroller-staticresources | kube-apiserver-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:openshift-kube-apiserver-installer because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-backingresourcecontroller-backingresourcecontroller-staticresources | kube-apiserver-operator | ServiceAccountCreated | Created ServiceAccount/installer-sa -n openshift-kube-apiserver because it was missing |
| | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-56fcb6cc5f-4xwp2 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1f952cec1e5332b84bdffa249cd426f39087058d6544ddcec650a414c15a9b68" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-config-observer-configobserver | openshift-kube-scheduler-operator | ObserveTLSSecurityProfile | cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"] |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-csisnapshotgueststaticresourcecontroller-csisnapshotgueststaticresourcecontroller-staticresources | csi-snapshot-controller-operator | CustomResourceDefinitionCreated | Created CustomResourceDefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io because it was missing |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-csisnapshotstaticresourcecontroller-csisnapshotstaticresourcecontroller-staticresources | csi-snapshot-controller-operator | ServiceAccountCreated | Created ServiceAccount/csi-snapshot-controller -n openshift-cluster-storage-operator because it was missing |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Degraded set to False ("WebhookRemovalControllerDegraded: csisnapshotcontrollers.operator.openshift.io \"cluster\" not found"),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"" "namespaces" "" "openshift-cluster-storage-operator"} {"operator.openshift.io" "csisnapshotcontrollers" "" "cluster"}] |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator | csi-snapshot-controller-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator | kube-controller-manager-operator-lock | LeaderElection | kube-controller-manager-operator-848f645654-7hmhg_de738ab7-0991-47f6-ae87-77abd1e92b90 became leader |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | TargetUpdateRequired | "aggregator-client" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator | csi-snapshot-controller-operator-lock | LeaderElection | csi-snapshot-controller-operator-6bc8656fdc-2q7f5_7505b1e0-30ac-4677-9b8d-40f784ef58db became leader |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretCreated | Created Secret/service-network-serving-certkey -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-to-kubelet-client-ca -n openshift-kube-apiserver-operator because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | TargetUpdateRequired | "kubelet-client" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-config-observer-configobserver | openshift-kube-scheduler-operator | ObserveTLSSecurityProfile | minTLSVersion changed to VersionTLS12 |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-kube-scheduler-node | openshift-kube-scheduler-operator | MasterNodeObserved | Observed new master node master-0 |
| | openshift-etcd-operator | openshift-cluster-etcd-operator | openshift-cluster-etcd-operator-lock | LeaderElection | etcd-operator-5bf4d88c6f-2bpmr_4e42e5af-18d1-425c-a75d-71c398ba6625 became leader |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretCreated | Created Secret/internal-loadbalancer-serving-certkey -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-serving-cert-certkey -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/localhost-recovery-serving-ca -n openshift-kube-apiserver-operator because it was missing |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "WebhookRemovalControllerDegraded: csisnapshotcontrollers.operator.openshift.io \"cluster\" not found" to "All is well" |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-csisnapshotgueststaticresourcecontroller-csisnapshotgueststaticresourcecontroller-staticresources | csi-snapshot-controller-operator | CustomResourceDefinitionCreated | Created CustomResourceDefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io because it was missing |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Progressing changed from Unknown to True ("CSISnapshotControllerProgressing: Waiting for Deployment to act on changes"),Available changed from Unknown to False ("CSISnapshotControllerAvailable: Waiting for Deployment") |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Progressing changed from Unknown to False ("All is well") |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [{"operator.openshift.io" "kubecontrollermanagers" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-controller-manager"} {"" "namespaces" "" "openshift-kube-controller-manager-operator"} {"" "namespaces" "" "kube-system"} {"" "nodes" "" ""} {"certificates.k8s.io" "certificatesigningrequests" "" ""}] to [{"operator.openshift.io" "kubecontrollermanagers" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-controller-manager"} {"" "namespaces" "" "openshift-kube-controller-manager-operator"} {"" "namespaces" "" "kube-system"} {"certificates.k8s.io" "certificatesigningrequests" "" ""} {"" "nodes" "" ""} {"config.openshift.io" "nodes" "" "cluster"}],status.versions changed from [] to [{"raw-internal" "4.18.29"}] |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorVersionChanged | clusteroperator/kube-controller-manager version "raw-internal" changed from "" to "4.18.29" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator | kube-controller-manager-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator | openshift-kube-scheduler-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-cluster-storage-operator | deployment-controller | csi-snapshot-controller | ScalingReplicaSet | Scaled up replica set csi-snapshot-controller-6b958b6f94 to 1 |
| | openshift-cluster-storage-operator | replicaset-controller | csi-snapshot-controller-6b958b6f94 | SuccessfulCreate | Created pod: csi-snapshot-controller-6b958b6f94-w74zr |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-lock | LeaderElection | openshift-kube-scheduler-operator-5f85974995-g4rwv_6fa38f67-eb24-4cf8-ba06-b91d44ab680d became leader |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-csisnapshotgueststaticresourcecontroller-csisnapshotgueststaticresourcecontroller-staticresources | csi-snapshot-controller-operator | CustomResourceDefinitionCreated | Created CustomResourceDefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io because it was missing |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-lock | LeaderElection | kube-storage-version-migrator-operator-b9c5dfc78-dcxkw_cc31d973-3fed-4fc6-8393-8c6327f549b5 became leader |
| | openshift-cluster-storage-operator | default-scheduler | csi-snapshot-controller-6b958b6f94-w74zr | Scheduled | Successfully assigned openshift-cluster-storage-operator/csi-snapshot-controller-6b958b6f94-w74zr to master-0 |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator | kube-storage-version-migrator-operator | OperatorStatusChanged | Status for clusteroperator/kube-storage-version-migrator changed: Upgradeable changed from Unknown to True ("All is well") |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-aggregator-client-ca -n openshift-config-managed because it was missing | |
openshift-cluster-storage-operator |
csi-snapshot-controller-operator-csisnapshotcontroller-deployment-controller--csisnapshotcontroller |
csi-snapshot-controller-operator |
DeploymentCreated |
Created Deployment.apps/csi-snapshot-controller -n openshift-cluster-storage-operator because it was missing | |
openshift-authentication-operator |
cluster-authentication-operator |
cluster-authentication-operator-lock |
LeaderElection |
authentication-operator-6c968fdfdf-nrrfw_6be0e687-3ba6-4e4f-8a60-d49573d7192f became leader | |
openshift-kube-storage-version-migrator-operator |
openshift-kube-storage-version-migrator-operator-kubestorageversionmigrator-deployment-controller--kubestorageversionmigrator |
kube-storage-version-migrator-operator |
DeploymentCreated |
Created Deployment.apps/migrator -n openshift-kube-storage-version-migrator because it was missing | |
openshift-kube-storage-version-migrator-operator |
openshift-kube-storage-version-migrator-operator-kubestorageversionmigratorstaticresources-kubestorageversionmigratorstaticresources-staticresources |
kube-storage-version-migrator-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/storage-version-migration-migrator because it was missing | |
openshift-kube-storage-version-migrator-operator |
openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator |
kube-storage-version-migrator-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-storage-version-migrator changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.versions changed from [] to [{"operator" "4.18.29"}] | |
openshift-kube-storage-version-migrator-operator |
openshift-kube-storage-version-migrator-operator-kubestorageversionmigratorstaticresources-kubestorageversionmigratorstaticresources-staticresources |
kube-storage-version-migrator-operator |
ServiceAccountCreated |
Created ServiceAccount/kube-storage-version-migrator-sa -n openshift-kube-storage-version-migrator because it was missing | |
openshift-cluster-storage-operator |
csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller |
csi-snapshot-controller-operator |
OperatorStatusChanged |
Status for clusteroperator/csi-snapshot-controller changed: Upgradeable changed from Unknown to True ("All is well") | |
openshift-kube-storage-version-migrator-operator |
openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator |
kube-storage-version-migrator-operator |
OperatorVersionChanged |
clusteroperator/kube-storage-version-migrator version "operator" changed from "" to "4.18.29" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
TargetUpdateRequired |
"localhost-recovery-serving-certkey" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
TargetUpdateRequired |
"kube-controller-manager-client-cert-key" in "openshift-config-managed" requires a new target cert/key pair: secret doesn't exist | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-boundsatokensignercontroller |
kube-apiserver-operator |
SecretCreated |
Created Secret/bound-service-account-signing-key -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
TargetUpdateRequired |
"check-endpoints-client-cert-key" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist | |
openshift-kube-storage-version-migrator |
default-scheduler |
migrator-74b7b57c65-d6v67 |
Scheduled |
Successfully assigned openshift-kube-storage-version-migrator/migrator-74b7b57c65-d6v67 to master-0 | |
openshift-kube-storage-version-migrator |
replicaset-controller |
migrator-74b7b57c65 |
SuccessfulCreate |
Created pod: migrator-74b7b57c65-d6v67 | |
openshift-kube-storage-version-migrator |
deployment-controller |
migrator |
ScalingReplicaSet |
Scaled up replica set migrator-74b7b57c65 to 1 | |
openshift-kube-storage-version-migrator-operator |
openshift-kube-storage-version-migrator-operator-kubestorageversionmigratorstaticresources-kubestorageversionmigratorstaticresources-staticresources |
kube-storage-version-migrator-operator |
NamespaceCreated |
Created Namespace/openshift-kube-storage-version-migrator because it was missing | |
| | openshift-etcd-operator | openshift-cluster-etcd-operator | etcd-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-member-observer-controller-etcdmemberscontroller | etcd-operator | ReportEtcdMembersErrorUpdatingStatus | etcds.operator.openshift.io "cluster" not found |
| | openshift-service-ca-operator | service-ca-operator | service-ca-operator-lock | LeaderElection | service-ca-operator-77758bc754-8smqn_ca160ab2-9540-4597-9fbf-da935392beb5 became leader |
| | openshift-service-ca-operator | service-ca-operator-status-controller-statussyncer_service-ca | service-ca-operator | OperatorStatusChanged | Status for clusteroperator/service-ca changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"operator.openshift.io" "servicecas" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-service-ca-operator"} {"" "namespaces" "" "openshift-service-ca"}] |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-kube-storage-version-migrator namespace |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [{"operator.openshift.io" "kubeschedulers" "" "cluster"} {"config.openshift.io" "schedulers" "" ""} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-scheduler-operator"} {"" "namespaces" "" "openshift-kube-scheduler"} {"controlplane.operator.openshift.io" "podnetworkconnectivitychecks" "openshift-kube-scheduler" ""}] to [{"operator.openshift.io" "kubeschedulers" "" "cluster"} {"config.openshift.io" "schedulers" "" ""} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-scheduler"} {"" "namespaces" "" "openshift-kube-scheduler-operator"} {"controlplane.operator.openshift.io" "podnetworkconnectivitychecks" "openshift-kube-apiserver" ""}],status.versions changed from [] to [{"raw-internal" "4.18.29"}] |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorVersionChanged | clusteroperator/kube-scheduler version "raw-internal" changed from "" to "4.18.29" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-config-observer-configobserver | openshift-kube-scheduler-operator | ObservedConfigChanged | Writing updated observed config: map[string]any{ + "servingInfo": map[string]any{ + "cipherSuites": []any{ + string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), + string("TLS_CHACHA20_POLY1305_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"), ..., + }, + "minTLSVersion": string("VersionTLS12"), + }, } |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:openshift:tokenreview-openshift-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator-lock | LeaderElection | openshift-controller-manager-operator-6c8676f99d-7z948_f13ff4b0-c7ac-42bd-ab3b-8b2d428a1859 became leader |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-config-observer-configobserver | openshift-controller-manager-operator | ObserveFeatureFlagsUpdated | Updated featureGates to BuildCSIVolumes=true |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-config-observer-configobserver | openshift-controller-manager-operator | ObservedConfigChanged | Writing updated observed config: map[string]any{ + "build": map[string]any{ + "buildDefaults": map[string]any{"resources": map[string]any{}}, + "imageTemplateFormat": map[string]any{ + "format": string("quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:31aa3c7464"...), + }, + }, + "controllers": []any{ + string("openshift.io/build"), string("openshift.io/build-config-change"), + string("openshift.io/builder-rolebindings"), + string("openshift.io/builder-serviceaccount"), + string("-openshift.io/default-rolebindings"), string("openshift.io/deployer"), + string("openshift.io/deployer-rolebindings"), + string("openshift.io/deployer-serviceaccount"), + string("openshift.io/deploymentconfig"), string("openshift.io/image-import"), + string("openshift.io/image-puller-rolebindings"), + string("openshift.io/image-signature-import"), + string("openshift.io/image-trigger"), string("openshift.io/ingress-ip"), + string("openshift.io/ingress-to-route"), + string("openshift.io/origin-namespace"), ..., + }, + "deployer": map[string]any{ + "imageTemplateFormat": map[string]any{ + "format": string("quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:42c3f5030d"...), + }, + }, + "featureGates": []any{string("BuildCSIVolumes=true")}, + "ingress": map[string]any{"ingressIPNetworkCIDR": string("")}, } |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"operator.openshift.io" "openshiftcontrollermanagers" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-controller-manager-operator"} {"" "namespaces" "" "openshift-controller-manager"} {"" "namespaces" "" "openshift-route-controller-manager"}] |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-kube-scheduler-node | openshift-kube-scheduler-operator | MasterNodesReadyChanged | All master nodes are ready |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded changed from Unknown to False ("All is well"),Upgradeable changed from Unknown to True ("All is well") |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:openshift:tokenreview-openshift-route-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ServiceCreated | Created Service/route-controller-manager -n openshift-route-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n openshift-route-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/prometheus-k8s -n openshift-route-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/system:openshift:sa-creating-route-controller-manager -n openshift-infra because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/system:openshift:sa-creating-route-controller-manager -n openshift-infra because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ServiceAccountCreated | Created ServiceAccount/route-controller-manager-sa -n openshift-route-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | NamespaceCreated | Created Namespace/openshift-route-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleBindingCreateFailed | Failed to create RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-route-controller-manager -n openshift-route-controller-manager: namespaces "openshift-route-controller-manager" not found |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | DeploymentCreated | Created Deployment.apps/controller-manager -n openshift-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapCreateFailed | Failed to create ConfigMap/config -n openshift-route-controller-manager: namespaces "openshift-route-controller-manager" not found |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleCreateFailed | Failed to create Role.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-route-controller-manager -n openshift-route-controller-manager: namespaces "openshift-route-controller-manager" not found |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapCreated | Created ConfigMap/openshift-global-ca -n openshift-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-route-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:openshift:openshift-route-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | NamespaceCreated | Created Namespace/openshift-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleBindingCreateFailed | Failed to create RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n openshift-controller-manager: namespaces "openshift-controller-manager" not found |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapCreateFailed | Failed to create ConfigMap/openshift-service-ca -n openshift-controller-manager: namespaces "openshift-controller-manager" not found |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleCreateFailed | Failed to create Role.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n openshift-controller-manager: namespaces "openshift-controller-manager" not found |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapCreateFailed | Failed to create ConfigMap/config -n openshift-controller-manager: namespaces "openshift-controller-manager" not found |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:tokenreview-openshift-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:ingress-to-route-controller because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:ingress-to-route-controller because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n kube-system because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n kube-system because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/system:openshift:sa-creating-openshift-controller-manager -n openshift-infra because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/system:openshift:sa-creating-openshift-controller-manager -n openshift-infra because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ServiceAccountCreated | Created ServiceAccount/openshift-controller-manager-sa -n openshift-controller-manager because it was missing |
| | default | kubelet | master-0 | Starting | Starting kubelet. |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ServiceCreated | Created Service/controller-manager -n openshift-controller-manager because it was missing |
| | default | kubelet | master-0 | NodeAllocatableEnforced | Updated Node Allocatable limit across pods |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-config-observer-configobserver | etcd-operator | ObservedConfigChanged | Writing updated observed config: map[string]any{ + "controlPlane": map[string]any{"replicas": float64(1)}, + "servingInfo": map[string]any{ + "cipherSuites": []any{ + string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), + string("TLS_CHACHA20_POLY1305_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"), ..., + }, + "minTLSVersion": string("VersionTLS12"), + }, } |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-config-observer-configobserver | etcd-operator | ObserveTLSSecurityProfile | cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"] |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/prometheus-k8s -n openshift-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n openshift-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:update-buildconfig-status because it was missing |
| | openshift-service-ca-operator | service-ca-operator | service-ca-operator | ServiceAccountCreated | Created ServiceAccount/service-ca -n openshift-service-ca because it was missing |
| | openshift-service-ca-operator | service-ca-operator | service-ca-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/system:openshift:controller:service-ca -n openshift-service-ca because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:update-buildconfig-status because it was missing |
| | openshift-service-ca-operator | service-ca-operator | service-ca-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/system:openshift:controller:service-ca -n openshift-service-ca because it was missing |
| | openshift-service-ca-operator | service-ca-operator | service-ca-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:service-ca because it was missing |
| | openshift-service-ca-operator | service-ca-operator | service-ca-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:service-ca because it was missing |
| | openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled up replica set controller-manager-59948648c9 to 1 |
| | openshift-service-ca-operator | service-ca-operator | service-ca-operator | NamespaceCreated | Created Namespace/openshift-service-ca because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Upgradeable changed from Unknown to True ("All is well") |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-controller-manager namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-route-controller-manager namespace |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-service-ca namespace | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-status-controller-statussyncer_etcd |
etcd-operator |
OperatorStatusChanged |
Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values" to "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nEtcdEndpointsDegraded: failed to get member list: getting cache client could not retrieve endpoints: node lister not synced" | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-status-controller-statussyncer_etcd |
etcd-operator |
OperatorStatusChanged |
Status for clusteroperator/etcd changed: Degraded set to False ("EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values"),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.versions changed from [] to [{"raw-internal" "4.18.29"}] | |
openshift-service-ca-operator |
service-ca-operator-status-controller-statussyncer_service-ca |
service-ca-operator |
OperatorStatusChanged |
Status for clusteroperator/service-ca changed: Degraded changed from Unknown to False ("All is well") | |
| (x2) | openshift-etcd-operator |
openshift-cluster-etcd-operator-status-controller-statussyncer_etcd |
etcd-operator |
OperatorVersionChanged |
clusteroperator/etcd version "raw-internal" changed from "" to "4.18.29" |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator | kube-storage-version-migrator-operator | OperatorStatusChanged | Status for clusteroperator/kube-storage-version-migrator changed: Progressing message changed from "KubeStorageVersionMigratorProgressing: Waiting for Deployment to act on changes" to "KubeStorageVersionMigratorProgressing: Waiting for Deployment to deploy pods" |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator | kube-storage-version-migrator-operator | OperatorStatusChanged | Status for clusteroperator/kube-storage-version-migrator changed: Progressing changed from Unknown to True ("KubeStorageVersionMigratorProgressing: Waiting for Deployment to act on changes"),Available changed from Unknown to False ("KubeStorageVersionMigratorAvailable: Waiting for Deployment") |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator | kube-storage-version-migrator-operator | OperatorStatusChanged | Status for clusteroperator/kube-storage-version-migrator changed: Degraded changed from Unknown to False ("All is well") |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded changed from Unknown to False ("All is well"),Available changed from Unknown to False ("OAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found"),Upgradeable changed from Unknown to True ("All is well") |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretCreated | Created Secret/kube-scheduler-client-cert-key -n openshift-config-managed because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:deployer because it was missing |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.versions changed from [] to [{"operator" "4.18.29"}] |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]" to "NodeControllerDegraded: All master nodes are ready\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nCertRotation_ExternalLoadBalancerServing_Degraded: configmaps \"loadbalancer-serving-ca\" already exists" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | TargetUpdateRequired | "external-loadbalancer-serving-certkey" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller-cert-rotation-controller-ExternalLoadBalancerServing-certrotationcontroller | kube-apiserver-operator | RotationError | configmaps "loadbalancer-serving-ca" already exists |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | ConfigMapCreateFailed | Failed to create ConfigMap/: configmaps "kube-control-plane-signer-ca" already exists |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-target-config-controller-targetconfigcontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/config -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/sa-token-signing-certs -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-audit-policy-controller-auditpolicycontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-audit-policies -n openshift-kube-apiserver because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:deployer because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:image-trigger-controller because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:image-trigger-controller because it was missing |
| (x2) | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorVersionChanged | clusteroperator/authentication version "operator" changed from "" to "4.18.29" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded changed from Unknown to False ("All is well") |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "All is well" to "NodeControllerDegraded: All master nodes are ready" |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:tokenreview-openshift-route-controller-manager because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Upgradeable changed from Unknown to True ("All is well") |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]",Progressing changed from Unknown to False ("NodeInstallerProgressing: 1 node is at revision 0"),Available changed from Unknown to False ("StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0") |
| (x7) | openshift-controller-manager | replicaset-controller | controller-manager-59948648c9 | FailedCreate | Error creating: pods "controller-manager-59948648c9-" is forbidden: error looking up service account openshift-controller-manager/openshift-controller-manager-sa: serviceaccount "openshift-controller-manager-sa" not found |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-config-observer-configobserver | kube-controller-manager-operator | ObservedConfigChanged | Writing updated observed config: map[string]any{ + "extendedArguments": map[string]any{ + "cluster-cidr": []any{string("10.128.0.0/16")}, + "cluster-name": []any{string("sno-d7v9r")}, + "feature-gates": []any{ + string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), + string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), + string("BareMetalLoadBalancer=true"), string("BuildCSIVolumes=true"), + string("ChunkSizeMiB=true"), string("CloudDualStackNodeIPs=true"), ..., + }, + "service-cluster-ip-range": []any{string("172.30.0.0/16")}, + }, + "featureGates": []any{ + string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), + string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), + string("BareMetalLoadBalancer=true"), string("BuildCSIVolumes=true"), + string("ChunkSizeMiB=true"), string("CloudDualStackNodeIPs=true"), + string("DisableKubeletCloudCredentialProviders=true"), + string("GCPLabelsTags=true"), string("HardwareSpeed=true"), + string("IngressControllerLBSubnetsAWS=true"), string("KMSv1=true"), + string("ManagedBootImages=true"), string("ManagedBootImagesAWS=true"), + string("MultiArchInstallAWS=true"), ..., + }, + "servingInfo": map[string]any{ + "cipherSuites": []any{ + string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), + string("TLS_CHACHA20_POLY1305_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"), ..., + }, + "minTLSVersion": string("VersionTLS12"), + }, } |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-config-observer-configobserver | kube-controller-manager-operator | ObserveTLSSecurityProfile | cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"] |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-config-observer-configobserver | kube-controller-manager-operator | ObserveTLSSecurityProfile | minTLSVersion changed to VersionTLS12 |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-config-observer-configobserver | kube-controller-manager-operator | ObserveFeatureFlagsUpdated | Updated featureGates to AWSEFSDriverVolumeMetrics=true,AdminNetworkPolicy=true,AlibabaPlatform=true,AzureWorkloadIdentity=true,BareMetalLoadBalancer=true,BuildCSIVolumes=true,ChunkSizeMiB=true,CloudDualStackNodeIPs=true,DisableKubeletCloudCredentialProviders=true,GCPLabelsTags=true,HardwareSpeed=true,IngressControllerLBSubnetsAWS=true,KMSv1=true,ManagedBootImages=true,ManagedBootImagesAWS=true,MultiArchInstallAWS=true,MultiArchInstallGCP=true,NetworkDiagnosticsConfig=true,NetworkLiveMigration=true,NetworkSegmentation=true,NewOLM=true,NodeDisruptionPolicy=true,OnClusterBuild=true,PersistentIPsForVirtualization=true,PrivateHostedZoneAWS=true,SetEIPForNLBIngressController=true,VSphereControlPlaneMachineSet=true,VSphereDriverConfiguration=true,VSphereMultiVCenters=true,VSphereStaticIPs=true,ValidatingAdmissionPolicy=true,AWSClusterHostedDNS=false,AdditionalRoutingCapabilities=false,AutomatedEtcdBackup=false,BootcNodeManagement=false,CSIDriverSharedResource=false,ClusterAPIInstall=false,ClusterAPIInstallIBMCloud=false,ClusterMonitoringConfig=false,ConsolePluginContentSecurityPolicy=false,DNSNameResolver=false,DynamicResourceAllocation=false,EtcdBackendQuota=false,EventedPLEG=false,Example=false,ExternalOIDC=false,ExternalOIDCWithUIDAndExtraClaimMappings=false,GCPClusterHostedDNS=false,GatewayAPI=false,ImageStreamImportMode=false,IngressControllerDynamicConfigurationManager=false,InsightsConfig=false,InsightsConfigAPI=false,InsightsOnDemandDataGather=false,InsightsRuntimeExtractor=false,MachineAPIMigration=false,MachineAPIOperatorDisableMachineHealthCheckController=false,MachineAPIProviderOpenStack=false,MachineConfigNodes=false,MaxUnavailableStatefulSet=false,MetricsCollectionProfiles=false,MinimumKubeletVersion=false,MixedCPUsAllocation=false,MultiArchInstallAzure=false,NodeSwap=false,NutanixMultiSubnets=false,OVNObservability=false,OpenShiftPodSecurityAdmission=false,PinnedImages=false,PlatformOperators=false,ProcMountType=false,RouteAdvertisements=false,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,SignatureStores=false,SigstoreImageVerification=false,StreamingCollectionEncodingToJSON=false,StreamingCollectionEncodingToProtobuf=false,TranslateStreamCloseWebsocketRequests=false,UpgradeStatus=false,UserNamespacesPodSecurityStandards=false,UserNamespacesSupport=false,VSphereMultiNetworks=false,VolumeAttributesClass=false,VolumeGroupSnapshot=false |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-config-observer-configobserver | kube-controller-manager-operator | ObserveFeatureFlagsUpdated | Updated extendedArguments.feature-gates to AWSEFSDriverVolumeMetrics=true,AdminNetworkPolicy=true,AlibabaPlatform=true,AzureWorkloadIdentity=true,BareMetalLoadBalancer=true,BuildCSIVolumes=true,ChunkSizeMiB=true,CloudDualStackNodeIPs=true,DisableKubeletCloudCredentialProviders=true,GCPLabelsTags=true,HardwareSpeed=true,IngressControllerLBSubnetsAWS=true,KMSv1=true,ManagedBootImages=true,ManagedBootImagesAWS=true,MultiArchInstallAWS=true,MultiArchInstallGCP=true,NetworkDiagnosticsConfig=true,NetworkLiveMigration=true,NetworkSegmentation=true,NewOLM=true,NodeDisruptionPolicy=true,OnClusterBuild=true,PersistentIPsForVirtualization=true,PrivateHostedZoneAWS=true,SetEIPForNLBIngressController=true,VSphereControlPlaneMachineSet=true,VSphereDriverConfiguration=true,VSphereMultiVCenters=true,VSphereStaticIPs=true,ValidatingAdmissionPolicy=true,AWSClusterHostedDNS=false,AdditionalRoutingCapabilities=false,AutomatedEtcdBackup=false,BootcNodeManagement=false,CSIDriverSharedResource=false,ClusterAPIInstall=false,ClusterAPIInstallIBMCloud=false,ClusterMonitoringConfig=false,ConsolePluginContentSecurityPolicy=false,DNSNameResolver=false,DynamicResourceAllocation=false,EtcdBackendQuota=false,EventedPLEG=false,Example=false,ExternalOIDC=false,ExternalOIDCWithUIDAndExtraClaimMappings=false,GCPClusterHostedDNS=false,GatewayAPI=false,ImageStreamImportMode=false,IngressControllerDynamicConfigurationManager=false,InsightsConfig=false,InsightsConfigAPI=false,InsightsOnDemandDataGather=false,InsightsRuntimeExtractor=false,MachineAPIMigration=false,MachineAPIOperatorDisableMachineHealthCheckController=false,MachineAPIProviderOpenStack=false,MachineConfigNodes=false,MaxUnavailableStatefulSet=false,MetricsCollectionProfiles=false,MinimumKubeletVersion=false,MixedCPUsAllocation=false,MultiArchInstallAzure=false,NodeSwap=false,NutanixMultiSubnets=false,OVNObservability=false,OpenShiftPodSecurityAdmission=false,PinnedImages=false,PlatformOperators=false,ProcMountType=false,RouteAdvertisements=false,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,SignatureStores=false,SigstoreImageVerification=false,StreamingCollectionEncodingToJSON=false,StreamingCollectionEncodingToProtobuf=false,TranslateStreamCloseWebsocketRequests=false,UpgradeStatus=false,UserNamespacesPodSecurityStandards=false,UserNamespacesSupport=false,VSphereMultiNetworks=false,VolumeAttributesClass=false,VolumeGroupSnapshot=false |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Progressing message changed from "CSISnapshotControllerProgressing: Waiting for Deployment to act on changes" to "CSISnapshotControllerProgressing: Waiting for Deployment to deploy pods" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "All is well" to "NodeControllerDegraded: All master nodes are ready" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-kube-controller-manager-node | kube-controller-manager-operator | MasterNodesReadyChanged | All master nodes are ready |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-kube-controller-manager-node | kube-controller-manager-operator | MasterNodeObserved | Observed new master node master-0 |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator | kube-controller-manager-operator | CABundleUpdateRequired | "csr-controller-signer-ca" in "openshift-kube-controller-manager-operator" requires a new cert: configmap doesn't exist |
| | openshift-controller-manager | default-scheduler | controller-manager-59948648c9-krt4p | Scheduled | Successfully assigned openshift-controller-manager/controller-manager-59948648c9-krt4p to master-0 |
| | openshift-controller-manager | replicaset-controller | controller-manager-59948648c9 | SuccessfulCreate | Created pod: controller-manager-59948648c9-krt4p |
| | default | kubelet | master-0 | NodeHasSufficientMemory | Node master-0 status is now: NodeHasSufficientMemory |
| | default | kubelet | master-0 | NodeHasNoDiskPressure | Node master-0 status is now: NodeHasNoDiskPressure |
| | default | kubelet | master-0 | NodeHasSufficientPID | Node master-0 status is now: NodeHasSufficientPID |
| | openshift-service-ca-operator | service-ca-operator | service-ca-operator | SecretCreated | Created Secret/signing-key -n openshift-service-ca because it was missing |
| | openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled up replica set route-controller-manager-864c894b4d to 1 |
| | openshift-route-controller-manager | replicaset-controller | route-controller-manager-864c894b4d | SuccessfulCreate | Created pod: route-controller-manager-864c894b4d-xbpkz |
| | openshift-route-controller-manager | default-scheduler | route-controller-manager-864c894b4d-xbpkz | Scheduled | Successfully assigned openshift-route-controller-manager/route-controller-manager-864c894b4d-xbpkz to master-0 |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretCreated | Created Secret/aggregator-client -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretCreated | Created Secret/kubelet-client -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretCreated | Created Secret/node-system-admin-client -n openshift-kube-apiserver-operator because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]",Progressing message changed from "All is well" to "NodeInstallerProgressing: 1 node is at revision 0",Available changed from Unknown to False ("StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0") |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found" |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Degraded changed from Unknown to False ("All is well") |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | DeploymentCreated | Created Deployment.apps/route-controller-manager -n openshift-route-controller-manager because it was missing |
| | openshift-network-operator | kubelet | iptables-alerter-bkmlp | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:79f99fd6cce984287932edf0d009660bb488d663081f3d62ec3b23bc8bfbf6c2" already present on machine |
| (x2) | openshift-route-controller-manager | kubelet | route-controller-manager-864c894b4d-xbpkz | FailedMount | MountVolume.SetUp failed for volume "config" : configmap "config" not found |
| (x2) | openshift-etcd-operator | openshift-cluster-etcd-operator-etcd-node | etcd-operator | MasterNodeObserved | Observed new master node master-0 |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-backingresourcecontroller-backingresourcecontroller-staticresources | openshift-kube-scheduler-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:openshift-kube-scheduler-installer because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-backingresourcecontroller-backingresourcecontroller-staticresources | openshift-kube-scheduler-operator | ServiceAccountCreated | Created ServiceAccount/installer-sa -n openshift-kube-scheduler because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-targetconfigcontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/config -n openshift-kube-scheduler because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-resource-sync-controller-resourcesynccontroller | openshift-kube-scheduler-operator | SecretCreated | Created Secret/kube-scheduler-client-cert-key -n openshift-kube-scheduler because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapCreated | Created ConfigMap/config -n openshift-controller-manager because it was missing |
| | openshift-service-ca-operator | service-ca-operator-resource-sync-controller-resourcesynccontroller | service-ca-operator | ConfigMapCreated | Created ConfigMap/service-ca -n openshift-config-managed because it was missing |
| | openshift-service-ca-operator | service-ca-operator-status-controller-statussyncer_service-ca | service-ca-operator | OperatorStatusChanged | Status for clusteroperator/service-ca changed: Progressing changed from Unknown to True ("Progressing: \nProgressing: service-ca does not have available replicas"),Available changed from Unknown to True ("All is well"),Upgradeable changed from Unknown to True ("All is well") |
| | openshift-service-ca-operator | service-ca-operator | service-ca-operator | DeploymentCreated | Created Deployment.apps/service-ca -n openshift-service-ca because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapCreated | Created ConfigMap/config -n openshift-route-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-route-controller-manager -n openshift-route-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-route-controller-manager -n openshift-route-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n openshift-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n openshift-controller-manager because it was missing |
| | openshift-service-ca-operator | service-ca-operator | service-ca-operator | ConfigMapCreated | Created ConfigMap/signing-cabundle -n openshift-service-ca because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | openshift-kube-scheduler-operator | NamespaceUpdated | Updated Namespace/openshift-kube-scheduler because it changed |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-backingresourcecontroller-backingresourcecontroller-staticresources | etcd-operator | ServiceAccountCreated | Created ServiceAccount/installer-sa -n openshift-etcd because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-env-var-controller | etcd-operator | EnvVarControllerUpdatingStatus | Operation cannot be fulfilled on etcds.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-backingresourcecontroller-backingresourcecontroller-staticresources | etcd-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:openshift-etcd-installer because it was missing |
openshift-etcd-operator |
openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller |
etcd-operator |
ConfigMapCreated |
Created ConfigMap/etcd-metric-serving-ca -n openshift-etcd-operator because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
SecretCreated |
Created Secret/check-endpoints-client-cert-key -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nCertRotation_ExternalLoadBalancerServing_Degraded: configmaps \"loadbalancer-serving-ca\" already exists" to "NodeControllerDegraded: All master nodes are ready\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nCertRotation_ExternalLoadBalancerServing_Degraded: configmaps \"loadbalancer-serving-ca\" already exists" | |
| (x2) | openshift-controller-manager |
kubelet |
controller-manager-59948648c9-krt4p |
FailedMount |
MountVolume.SetUp failed for volume "config" : configmap "config" not found |
openshift-cluster-olm-operator |
kubelet |
cluster-olm-operator-56fcb6cc5f-4xwp2 |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1f952cec1e5332b84bdffa249cd426f39087058d6544ddcec650a414c15a9b68" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
SecretCreated |
Created Secret/kube-controller-manager-client-cert-key -n openshift-config-managed because it was missing | |
openshift-apiserver-operator |
kubelet |
openshift-apiserver-operator-7bf7f6b755-sh6qf |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8375671da86aa527ee7e291d86971b0baa823ffc7663b5a983084456e76c0f59" | |
openshift-service-ca |
deployment-controller |
service-ca |
ScalingReplicaSet |
Scaled up replica set service-ca-77c99c46b8 to 1 | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller |
etcd-operator |
ConfigMapCreated |
Created ConfigMap/etcd-serving-ca -n openshift-config because it was missing | |
openshift-service-ca |
replicaset-controller |
service-ca-77c99c46b8 |
SuccessfulCreate |
Created pod: service-ca-77c99c46b8-6cntk | |
openshift-service-ca |
default-scheduler |
service-ca-77c99c46b8-6cntk |
Scheduled |
Successfully assigned openshift-service-ca/service-ca-77c99c46b8-6cntk to master-0 | |
openshift-config-operator |
kubelet |
openshift-config-operator-68758cbcdb-zh8g5 |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3b8d91a25eeb9f02041e947adb3487da3e7ab8449d3d2ad015827e7954df7b34" | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator |
kube-controller-manager-operator |
TargetUpdateRequired |
"csr-signer" in "openshift-kube-controller-manager-operator" requires a new target cert/key pair: secret doesn't exist | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/csr-controller-signer-ca -n openshift-kube-controller-manager-operator because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
SecretCreated |
Created Secret/localhost-recovery-serving-certkey -n openshift-kube-apiserver because it was missing | |
openshift-authentication-operator |
cluster-authentication-operator-config-observer-configobserver |
authentication-operator |
ObserveAPIAudiences |
service account issuer changed from to https://kubernetes.default.svc | |
openshift-authentication-operator |
cluster-authentication-operator-config-observer-configobserver |
authentication-operator |
ObserveStorageUpdated |
Updated storage urls to https://192.168.32.10:2379 | |
openshift-authentication-operator |
cluster-authentication-operator-config-observer-configobserver |
authentication-operator |
ObserveTLSSecurityProfile |
minTLSVersion changed to VersionTLS12 | |
openshift-authentication-operator |
cluster-authentication-operator-config-observer-configobserver |
authentication-operator |
ObserveTLSSecurityProfile |
cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"] | |
openshift-authentication-operator |
cluster-authentication-operator-config-observer-configobserver |
authentication-operator |
ObservedConfigChanged |
Writing updated section ("oauthAPIServer") of observed config: " map[string]any(\n- \tnil,\n+ \t{\n+ \t\t\"apiServerArguments\": map[string]any{\n+ \t\t\t\"api-audiences\": []any{string(\"https://kubernetes.default.svc\")},\n+ \t\t\t\"cors-allowed-origins\": []any{string(`//127\\.0\\.0\\.1(:|$)`), string(\"//localhost(:|$)\")},\n+ \t\t\t\"etcd-servers\": []any{string(\"https://192.168.32.10:2379\")},\n+ \t\t\t\"tls-cipher-suites\": []any{\n+ \t\t\t\tstring(\"TLS_AES_128_GCM_SHA256\"), string(\"TLS_AES_256_GCM_SHA384\"),\n+ \t\t\t\tstring(\"TLS_CHACHA20_POLY1305_SHA256\"),\n+ \t\t\t\tstring(\"TLS_ECDHE_ECDSA_WITH_AES_128_GCM\"...), ...,\n+ \t\t\t},\n+ \t\t\t\"tls-min-version\": string(\"VersionTLS12\"),\n+ \t\t},\n+ \t},\n )\n" | |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
ConfigMapCreated |
Created configmap/openshift-service-ca-n openshift-controller-manager because it was missing | |
openshift-route-controller-manager |
deployment-controller |
route-controller-manager |
ScalingReplicaSet |
Scaled up replica set route-controller-manager-5f845c897b to 1 from 0 | |
| (x3) | openshift-controller-manager |
kubelet |
controller-manager-59948648c9-krt4p |
FailedMount |
MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found |
openshift-authentication-operator |
cluster-authentication-operator-config-observer-configobserver |
authentication-operator |
ObserveTemplates |
templates changed to map["error":"/var/config/system/secrets/v4-0-config-system-ocp-branding-template/errors.html" "login":"/var/config/system/secrets/v4-0-config-system-ocp-branding-template/login.html" "providerSelection":"/var/config/system/secrets/v4-0-config-system-ocp-branding-template/providers.html"] | |
openshift-network-diagnostics |
multus |
network-check-target-w2lss |
AddedInterface |
Add eth0 [10.128.0.3/23] from ovn-kubernetes | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig -n openshift-kube-scheduler because it was missing | |
openshift-network-diagnostics |
kubelet |
network-check-target-w2lss |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9724d2036305cbd729e1f484c5bad89971de977fff8a6723fef1873858dd1123" already present on machine | |
openshift-network-diagnostics |
kubelet |
network-check-target-w2lss |
Created |
Created container: network-check-target-container | |
openshift-network-diagnostics |
kubelet |
network-check-target-w2lss |
Started |
Started container network-check-target-container | |
openshift-authentication-operator |
cluster-authentication-operator-config-observer-configobserver |
authentication-operator |
ObserveTLSSecurityProfile |
minTLSVersion changed to VersionTLS12 | |
openshift-authentication-operator |
cluster-authentication-operator-config-observer-configobserver |
authentication-operator |
ObserveTLSSecurityProfile |
cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"] | |
openshift-route-controller-manager |
replicaset-controller |
route-controller-manager-5f845c897b |
SuccessfulCreate |
Created pod: route-controller-manager-5f845c897b-hhhv6 | |
openshift-authentication-operator |
cluster-authentication-operator-config-observer-configobserver |
authentication-operator |
ObserveAuditProfile |
AuditProfile changed from '%!s(<nil>)' to 'map[audit-log-format:[json] audit-log-maxbackup:[10] audit-log-maxsize:[100] audit-log-path:[/var/log/oauth-server/audit.log] audit-policy-file:[/var/run/configmaps/audit/audit.yaml]]' | |
openshift-controller-manager |
default-scheduler |
controller-manager-5c6c4578c-plvql |
FailedScheduling |
0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. | |
openshift-authentication-operator |
cluster-authentication-operator-config-observer-configobserver |
authentication-operator |
ObserveAPIServerURL |
loginURL changed from to https://api.sno.openstack.lab:6443 | |
openshift-authentication-operator |
cluster-authentication-operator-config-observer-configobserver |
authentication-operator |
ObservedConfigChanged |
Writing updated section ("oauthServer") of observed config: " map[string]any(\n- \tnil,\n+ \t{\n+ \t\t\"corsAllowedOrigins\": []any{string(`//127\\.0\\.0\\.1(:|$)`), string(\"//localhost(:|$)\")},\n+ \t\t\"oauthConfig\": map[string]any{\n+ \t\t\t\"assetPublicURL\": string(\"\"),\n+ \t\t\t\"loginURL\": string(\"https://api.sno.openstack.lab:6443\"),\n+ \t\t\t\"templates\": map[string]any{\n+ \t\t\t\t\"error\": string(\"/var/config/system/secrets/v4-0-\"...),\n+ \t\t\t\t\"login\": string(\"/var/config/system/secrets/v4-0-\"...),\n+ \t\t\t\t\"providerSelection\": string(\"/var/config/system/secrets/v4-0-\"...),\n+ \t\t\t},\n+ \t\t\t\"tokenConfig\": map[string]any{\n+ \t\t\t\t\"accessTokenMaxAgeSeconds\": float64(86400),\n+ \t\t\t\t\"authorizeTokenMaxAgeSeconds\": float64(300),\n+ \t\t\t},\n+ \t\t},\n+ \t\t\"serverArguments\": map[string]any{\n+ \t\t\t\"audit-log-format\": []any{string(\"json\")},\n+ \t\t\t\"audit-log-maxbackup\": []any{string(\"10\")},\n+ \t\t\t\"audit-log-maxsize\": []any{string(\"100\")},\n+ \t\t\t\"audit-log-path\": []any{string(\"/var/log/oauth-server/audit.log\")},\n+ \t\t\t\"audit-policy-file\": []any{string(\"/var/run/configmaps/audit/audit.\"...)},\n+ \t\t},\n+ \t\t\"servingInfo\": map[string]any{\n+ \t\t\t\"cipherSuites\": []any{\n+ \t\t\t\tstring(\"TLS_AES_128_GCM_SHA256\"), string(\"TLS_AES_256_GCM_SHA384\"),\n+ \t\t\t\tstring(\"TLS_CHACHA20_POLY1305_SHA256\"),\n+ \t\t\t\tstring(\"TLS_ECDHE_ECDSA_WITH_AES_128_GCM\"...), ...,\n+ \t\t\t},\n+ \t\t\t\"minTLSVersion\": string(\"VersionTLS12\"),\n+ \t\t},\n+ \t\t\"volumesToMount\": map[string]any{\"identityProviders\": string(\"{}\")},\n+ \t},\n )\n" | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found" to "WebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found" | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Progressing changed from Unknown to False ("All is well") | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: endpoints \"api\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" to "APIServicesAvailable: endpoints \"api\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." | |
openshift-controller-manager |
replicaset-controller |
controller-manager-5c6c4578c |
SuccessfulCreate |
Created pod: controller-manager-5c6c4578c-plvql | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources |
openshift-kube-scheduler-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/system:openshift:sa-listing-configmaps -n openshift-kube-scheduler because it was missing | |
openshift-controller-manager |
replicaset-controller |
controller-manager-59948648c9 |
SuccessfulDelete |
Deleted pod: controller-manager-59948648c9-krt4p | |
openshift-kube-storage-version-migrator |
kubelet |
migrator-74b7b57c65-d6v67 |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e438b814f8e16f00b3fc4b69991af80eee79ae111d2a707f34aa64b2ccbb6eb" | |
openshift-controller-manager |
deployment-controller |
controller-manager |
ScalingReplicaSet |
Scaled up replica set controller-manager-5c6c4578c to 1 from 0 | |
openshift-controller-manager |
deployment-controller |
controller-manager |
ScalingReplicaSet |
Scaled down replica set controller-manager-59948648c9 to 0 from 1 | |
| (x2) | openshift-etcd-operator |
openshift-cluster-etcd-operator-etcd-node |
etcd-operator |
MasterNodesReadyChanged |
All master nodes are ready |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
SecretCreated |
Created Secret/external-loadbalancer-serving-certkey -n openshift-kube-apiserver because it was missing | |
openshift-cluster-storage-operator |
kubelet |
csi-snapshot-controller-6b958b6f94-w74zr |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3ce2cbf1032ad0f24f204db73687002fcf302e86ebde3945801c74351b64576" | |
| (x3) | openshift-route-controller-manager |
kubelet |
route-controller-manager-864c894b4d-xbpkz |
FailedMount |
MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Available message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" to "APIServicesAvailable: endpoints \"api\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" | |
| (x3) | openshift-route-controller-manager |
kubelet |
route-controller-manager-864c894b4d-xbpkz |
FailedMount |
MountVolume.SetUp failed for volume "client-ca" : configmap "client-ca" not found |
openshift-cluster-storage-operator |
multus |
csi-snapshot-controller-6b958b6f94-w74zr |
AddedInterface |
Add eth0 [10.128.0.27/23] from ovn-kubernetes | |
openshift-route-controller-manager |
replicaset-controller |
route-controller-manager-864c894b4d |
SuccessfulDelete |
Deleted pod: route-controller-manager-864c894b4d-xbpkz | |
openshift-authentication-operator |
oauth-apiserver-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice |
authentication-operator |
FastControllerResync |
Controller "openshift-apiserver-APIService" resync interval is set to 10s which might lead to client request throttling | |
openshift-authentication-operator |
cluster-authentication-operator-config-observer-configobserver |
authentication-operator |
ObserveTokenConfig |
accessTokenMaxAgeSeconds changed from %!d(float64=0) to %!d(float64=86400) | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources |
openshift-kube-scheduler-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/system:openshift:sa-listing-configmaps -n openshift-kube-scheduler because it was missing | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "All is well" to "WebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found" | |
| (x3) | openshift-controller-manager |
kubelet |
controller-manager-59948648c9-krt4p |
FailedMount |
MountVolume.SetUp failed for volume "client-ca" : configmap "client-ca" not found |
openshift-etcd-operator |
openshift-cluster-etcd-operator-etcdstaticresources-etcdstaticresources-staticresources |
etcd-operator |
NamespaceUpdated |
Updated Namespace/openshift-etcd because it changed | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller |
etcd-operator |
ConfigMapCreated |
Created ConfigMap/cluster-config-v1 -n openshift-etcd because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources |
kube-apiserver-operator |
ServiceCreated |
Created Service/apiserver -n openshift-kube-apiserver because it was missing | |
openshift-kube-storage-version-migrator |
multus |
migrator-74b7b57c65-d6v67 |
AddedInterface |
Add eth0 [10.128.0.28/23] from ovn-kubernetes | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager |
openshift-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-controller-manager changed: Degraded message changed from "All is well" to "OpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/leader-role.yaml\" (string): namespaces \"openshift-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/leader-rolebinding.yaml\" (string): namespaces \"openshift-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/route-controller-manager-leader-role.yaml\" (string): namespaces \"openshift-route-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/route-controller-manager-leader-rolebinding.yaml\" (string): namespaces \"openshift-route-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: " | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources |
openshift-kube-scheduler-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-kube-scheduler -n kube-system because it was missing | |
openshift-service-ca |
multus |
service-ca-77c99c46b8-6cntk |
AddedInterface |
Add eth0 [10.128.0.31/23] from ovn-kubernetes | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/aggregator-client-ca -n openshift-kube-controller-manager because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator |
kube-controller-manager-operator |
SecretCreated |
Created Secret/csr-signer -n openshift-kube-controller-manager-operator because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources |
openshift-kube-scheduler-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:kube-scheduler:public-2 because it was missing | |
openshift-route-controller-manager |
deployment-controller |
route-controller-manager |
ScalingReplicaSet |
Scaled down replica set route-controller-manager-864c894b4d to 0 from 1 | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found" to "WebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: " | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources |
kube-apiserver-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:check-endpoints-crd-reader because it was missing | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller |
etcd-operator |
ConfigMapUpdated |
Updated ConfigMap/etcd-ca-bundle -n openshift-etcd-operator: cause by changes in data.ca-bundle.crt | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources |
kube-apiserver-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:kube-apiserver-check-endpoints-crd-reader because it was missing | |
| (x2) | openshift-route-controller-manager |
default-scheduler |
route-controller-manager-5f845c897b-hhhv6 |
FailedScheduling |
0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources |
kube-apiserver-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:kube-apiserver-check-endpoints-node-reader because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources |
kube-apiserver-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:kube-apiserver-check-endpoints-auth-delegator because it was missing | |
openshift-controller-manager |
default-scheduler |
controller-manager-5c6c4578c-plvql |
Scheduled |
Successfully assigned openshift-controller-manager/controller-manager-5c6c4578c-plvql to master-0 | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nRevisionControllerDegraded: configmap \"audit\" not found" to "WebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nRevisionControllerDegraded: configmap \"audit\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found" | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: endpoints \"api\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: endpoints \"api\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-backingresourcecontroller-backingresourcecontroller-staticresources |
kube-controller-manager-operator |
ServiceAccountCreated |
Created ServiceAccount/installer-sa -n openshift-kube-controller-manager because it was missing | |
openshift-authentication-operator |
oauth-apiserver-audit-policy-controller-auditpolicycontroller |
authentication-operator |
FastControllerResync |
Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources |
kube-controller-manager-operator |
NamespaceUpdated |
Updated Namespace/openshift-kube-controller-manager because it changed | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller |
kube-controller-manager-operator |
TargetConfigDeleted |
Deleted target configmap openshift-config-managed/csr-controller-ca because source config does not exist | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: " to "WebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nRevisionControllerDegraded: configmap \"audit\" not found" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources |
kube-apiserver-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:check-endpoints-node-reader because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources |
kube-apiserver-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:check-endpoints because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-target-config-controller-targetconfigcontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-pod -n openshift-kube-apiserver because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-targetconfigcontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/serviceaccount-ca -n openshift-kube-scheduler because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources |
openshift-kube-scheduler-operator |
ServiceCreated |
Created Service/scheduler -n openshift-kube-scheduler because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-backingresourcecontroller-backingresourcecontroller-staticresources |
kube-controller-manager-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:openshift-kube-controller-manager-installer because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-targetconfigcontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/config -n openshift-kube-controller-manager because it was missing | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-etcdstaticresources-etcdstaticresources-staticresources |
etcd-operator |
ServiceAccountCreated |
Created ServiceAccount/etcd-sa -n openshift-etcd because it was missing | |
openshift-authentication-operator |
oauth-apiserver-webhook-authenticator-cert-approver-OpenShiftAuthenticator-webhookauthenticatorcertapprover_openshiftauthenticator |
authentication-operator |
CSRApproval |
The CSR "system:openshift:openshift-authenticator-pwb88" has been approved | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/kube-controller-cert-syncer-kubeconfig -n openshift-kube-controller-manager because it was missing | |
| (x4) | openshift-cluster-version | kubelet | cluster-version-operator-77dfcc565f-nqpsd | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : secret "cluster-version-operator-serving-cert" not found |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | openshift-kube-scheduler-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:kube-scheduler-recovery because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-config-observer-configobserver | etcd-operator | ObservedConfigWriteError | Failed to write observed config: Operation cannot be fulfilled on etcds.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again |
| (x5) | openshift-etcd-operator | openshift-cluster-etcd-operator-installer-controller | etcd-operator | RequiredInstallerResourcesMissing | configmaps: etcd-all-bundles-0,etcd-endpoints-0,etcd-pod-0, secrets: etcd-all-certs-0 |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-ca-bundle -n openshift-config because it was missing |
| (x4) | openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-85cff47f46-4gv5j | FailedMount | MountVolume.SetUp failed for volume "apiservice-cert" : secret "performance-addon-operator-webhook-cert" not found |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | openshift-kube-scheduler-operator | ServiceAccountCreated | Created ServiceAccount/openshift-kube-scheduler-sa -n openshift-kube-scheduler because it was missing |
| | openshift-authentication-operator | oauth-apiserver-openshiftauthenticatorcertrequester | authentication-operator | NoValidCertificateFound | No valid client certificate for OpenShiftAuthenticatorCertRequester is found: unable to parse certificate: data does not contain any valid RSA or ECDSA certificates |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nRevisionControllerDegraded: configmap \"audit\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found" to "WebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nRevisionControllerDegraded: configmap \"audit\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server" |
| | openshift-authentication-operator | oauth-apiserver-openshiftauthenticatorcertrequester | authentication-operator | CSRCreated | A csr "system:openshift:openshift-authenticator-pwb88" is created for OpenShiftAuthenticatorCertRequester |
| (x2) | openshift-etcd-operator | openshift-cluster-etcd-operator-config-observer-configobserver | etcd-operator | ObserveTLSSecurityProfile | minTLSVersion changed to VersionTLS12 |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-satokensignercontroller | kube-controller-manager-operator | SecretCreated | Created Secret/service-account-private-key -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/service-ca -n openshift-kube-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Progressing changed from Unknown to True ("Progressing: deployment/controller-manager: observed generation is 1, desired generation is 2.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: observed generation is 1, desired generation is 2.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: openshiftcontrollermanagers.operator.openshift.io/cluster: observed generation is 0, desired generation is 2."),Available changed from Unknown to False ("Available: no pods available on any node."),Upgradeable changed from Unknown to True ("All is well") |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | kube-controller-manager-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-kube-controller-manager -n kube-system because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | kube-controller-manager-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/system:openshift:leader-election-lock-cluster-policy-controller -n openshift-kube-controller-manager because it was missing |
| (x4) | openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-85cff47f46-4gv5j | FailedMount | MountVolume.SetUp failed for volume "node-tuning-operator-tls" : secret "node-tuning-operator-tls" not found |
| | openshift-authentication-operator | cluster-authentication-operator-resource-sync-controller-resourcesynccontroller | authentication-operator | ConfigMapCreateFailed | Failed to create ConfigMap/etcd-serving-ca -n openshift-oauth-apiserver: namespaces "openshift-oauth-apiserver" not found |
| | openshift-config-operator | kubelet | openshift-config-operator-68758cbcdb-zh8g5 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3b8d91a25eeb9f02041e947adb3487da3e7ab8449d3d2ad015827e7954df7b34" in 4.654s (4.654s including waiting). Image size: 490455952 bytes. |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller | kube-controller-manager-operator | SecretCreated | Created Secret/kube-controller-manager-client-cert-key -n openshift-kube-controller-manager because it was missing |
| | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-56fcb6cc5f-4xwp2 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1f952cec1e5332b84bdffa249cd426f39087058d6544ddcec650a414c15a9b68" in 4.353s (4.353s including waiting). Image size: 489528665 bytes. |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/system:openshift:controller:check-endpoints -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/system:openshift:controller:kube-apiserver-check-endpoints -n kube-system because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-config-observer-configobserver | kube-controller-manager-operator | ObserveServiceCAConfigMap | observed change in config |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/check-endpoints-kubeconfig -n openshift-kube-apiserver because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Degraded message changed from "OpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/leader-role.yaml\" (string): namespaces \"openshift-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/leader-rolebinding.yaml\" (string): namespaces \"openshift-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/route-controller-manager-leader-role.yaml\" (string): namespaces \"openshift-route-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/route-controller-manager-leader-rolebinding.yaml\" (string): namespaces \"openshift-route-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: " to "All is well" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-etcdstaticresources-etcdstaticresources-staticresources | etcd-operator | ServiceUpdated | Updated Service/etcd -n openshift-etcd because it changed |
| | openshift-network-operator | kubelet | iptables-alerter-bkmlp | Created | Created container: iptables-alerter |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-config-observer-configobserver | kube-controller-manager-operator | ObservedConfigChanged | Writing updated observed config: map[string]any{ "extendedArguments": map[string]any{"cluster-cidr": []any{string("10.128.0.0/16")}, "cluster-name": []any{string("sno-d7v9r")}, "feature-gates": []any{string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), ...}, "service-cluster-ip-range": []any{string("172.30.0.0/16")}}, "featureGates": []any{string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), ...}, + "serviceServingCert": map[string]any{ + "certFile": string("/etc/kubernetes/static-pod-resources/configmaps/service-ca/ca-bundle.crt"), + }, "servingInfo": map[string]any{"cipherSuites": []any{string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), string("TLS_CHACHA20_POLY1305_SHA256"), string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), ...}, "minTLSVersion": string("VersionTLS12")}, } |
| | openshift-network-operator | kubelet | iptables-alerter-bkmlp | Started | Started container iptables-alerter |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | kube-controller-manager-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-election-lock-cluster-policy-controller -n openshift-kube-controller-manager because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-etcdstaticresources-etcdstaticresources-staticresources | etcd-operator | ServiceMonitorCreated | Created ServiceMonitor.monitoring.coreos.com/etcd -n openshift-etcd-operator because it was missing |
| | openshift-kube-storage-version-migrator | kubelet | migrator-74b7b57c65-d6v67 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e438b814f8e16f00b3fc4b69991af80eee79ae111d2a707f34aa64b2ccbb6eb" in 3.74s (3.74s including waiting). Image size: 437737925 bytes. |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-targetconfigcontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/scheduler-kubeconfig -n openshift-kube-scheduler because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-etcdstaticresources-etcdstaticresources-staticresources | etcd-operator | ServiceMonitorCreated | Created ServiceMonitor.monitoring.coreos.com/etcd-minimal -n openshift-etcd-operator because it was missing |
| | openshift-apiserver-operator | kubelet | openshift-apiserver-operator-7bf7f6b755-sh6qf | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8375671da86aa527ee7e291d86971b0baa823ffc7663b5a983084456e76c0f59" in 4.951s (4.951s including waiting). Image size: 506741476 bytes. |
| | openshift-cluster-storage-operator | snapshot-controller-leader/csi-snapshot-controller-6b958b6f94-w74zr | snapshot-controller-leader | LeaderElection | csi-snapshot-controller-6b958b6f94-w74zr became leader |
| | openshift-kube-storage-version-migrator | kubelet | migrator-74b7b57c65-d6v67 | Started | Started container graceful-termination |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nRevisionControllerDegraded: configmap \"audit\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server" to "WebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nRevisionControllerDegraded: configmap \"audit\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" |
| | openshift-authentication-operator | cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources | authentication-operator | ConfigMapCreateFailed | Failed to create ConfigMap/audit -n openshift-authentication: namespaces "openshift-authentication" not found |
| | openshift-route-controller-manager | default-scheduler | route-controller-manager-5f845c897b-hhhv6 | Scheduled | Successfully assigned openshift-route-controller-manager/route-controller-manager-5f845c897b-hhhv6 to master-0 |
| | openshift-authentication-operator | oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources | authentication-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:oauth-apiserver because it was missing |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-oauth-apiserver namespace |
| | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-6b958b6f94-w74zr | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3ce2cbf1032ad0f24f204db73687002fcf302e86ebde3945801c74351b64576" in 3.777s (3.777s including waiting). Image size: 458169255 bytes. |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | openshift-kube-scheduler-operator | SecretCreated | Created Secret/localhost-recovery-client-token -n openshift-kube-scheduler because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | openshift-kube-scheduler-operator | ServiceAccountCreated | Created ServiceAccount/localhost-recovery-client -n openshift-kube-scheduler because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-pod-1 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-target-config-controller-targetconfigcontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/client-ca -n openshift-kube-apiserver because it was missing |
| | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-56fcb6cc5f-4xwp2 | Created | Created container: copy-operator-controller-manifests |
| | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-56fcb6cc5f-4xwp2 | Started | Started container copy-operator-controller-manifests |
| | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-56fcb6cc5f-4xwp2 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:86af77350cfe6fd69280157e4162aa0147873d9431c641ae4ad3e881ff768a73" |
| | openshift-kube-storage-version-migrator | kubelet | migrator-74b7b57c65-d6v67 | Created | Created container: migrator |
| | openshift-kube-storage-version-migrator | kubelet | migrator-74b7b57c65-d6v67 | Started | Started container migrator |
| | openshift-kube-storage-version-migrator | kubelet | migrator-74b7b57c65-d6v67 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e438b814f8e16f00b3fc4b69991af80eee79ae111d2a707f34aa64b2ccbb6eb" already present on machine |
| | openshift-kube-storage-version-migrator | kubelet | migrator-74b7b57c65-d6v67 | Created | Created container: graceful-termination |
| | openshift-authentication-operator | oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources | authentication-operator | NamespaceCreated | Created Namespace/openshift-oauth-apiserver because it was missing |
| (x2) | openshift-config-operator | config-operator-status-controller-statussyncer_config-operator | openshift-config-operator | OperatorVersionChanged | clusteroperator/config-operator version "operator" changed from "" to "4.18.29" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/control-plane-node-kubeconfig -n openshift-kube-apiserver because it was missing |
| (x2) | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorVersionChanged | clusteroperator/csi-snapshot-controller version "csi-snapshot-controller" changed from "" to "4.18.29" |
| (x2) | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorVersionChanged | clusteroperator/csi-snapshot-controller version "operator" changed from "" to "4.18.29" |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Progressing changed from True to False ("All is well"),Available changed from False to True ("All is well") |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: status.versions changed from [] to [{"csi-snapshot-controller" "4.18.29"} {"operator" "4.18.29"}] |
| | openshift-config-operator | config-operator-status-controller-statussyncer_config-operator | openshift-config-operator | OperatorStatusChanged | Status for clusteroperator/config-operator changed: Degraded set to Unknown (""),Progressing set to False ("All is well"),Available set to True ("All is well"),Upgradeable set to True ("All is well"),EvaluationConditionsDetected set to Unknown (""),status.versions changed from [] to [{"feature-gates" "4.18.29"} {"operator" "4.18.29"}] |
| | openshift-config-operator | config-operator-status-controller-statussyncer_config-operator | openshift-config-operator | OperatorStatusChanged | Status for clusteroperator/config-operator changed: Degraded changed from Unknown to False ("All is well") |
| (x2) | openshift-config-operator | config-operator-status-controller-statussyncer_config-operator | openshift-config-operator | OperatorVersionChanged | clusteroperator/config-operator version "feature-gates" changed from "" to "4.18.29" |
| (x4) | openshift-controller-manager | kubelet | controller-manager-5c6c4578c-plvql | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-target-config-controller-targetconfigcontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-pod -n openshift-etcd because it was missing |
| | openshift-config-operator | config-operator-configoperatorcontroller | openshift-config-operator | ConfigOperatorStatusChanged | Operator conditions defaulted: [{OperatorAvailable True 2025-12-04 11:37:53 +0000 UTC AsExpected } {OperatorProgressing False 2025-12-04 11:37:53 +0000 UTC AsExpected } {OperatorUpgradeable True 2025-12-04 11:37:53 +0000 UTC AsExpected }] |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-script-controller-scriptcontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-scripts -n openshift-etcd because it was missing |
| | openshift-service-ca | service-ca-controller | service-ca-controller-lock | LeaderElection | service-ca-77c99c46b8-6cntk_9be60211-458a-4f3c-858a-f4d29caa3356 became leader |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | FastControllerResync | Controller "openshift-apiserver-APIService" resync interval is set to 10s which might lead to client request throttling |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Upgradeable changed from Unknown to True ("All is well") |
| | openshift-config-operator | config-operator-configoperatorcontroller | openshift-config-operator | FastControllerResync | Controller "ConfigOperatorController" resync interval is set to 10s which might lead to client request throttling |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | StartingNewRevision | new revision 1 triggered by "configmap \"etcd-pod-0\" not found" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/authentication-reader-for-authenticated-users -n kube-system because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:kube-apiserver-recovery because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/bound-sa-token-signing-certs -n openshift-config-managed because it was missing |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"operator.openshift.io" "openshiftapiservers" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-apiserver-operator"} {"" "namespaces" "" "openshift-apiserver"} {"" "namespaces" "" "openshift-etcd-operator"} {"" "endpoints" "openshift-etcd" "host-etcd-2"} {"controlplane.operator.openshift.io" "podnetworkconnectivitychecks" "openshift-apiserver" ""} {"apiregistration.k8s.io" "apiservices" "" "v1.apps.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.authorization.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.build.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.image.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.project.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.quota.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.route.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.security.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.template.openshift.io"}],status.versions changed from [] to [{"operator" "4.18.29"}] | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorVersionChanged |
clusteroperator/openshift-apiserver version "operator" changed from "" to "4.18.29" | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-targetconfigcontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/cluster-policy-controller-config -n openshift-kube-controller-manager because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-targetconfigcontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/kube-scheduler-pod -n openshift-kube-scheduler because it was missing | |
openshift-apiserver-operator |
openshift-apiserver-operator |
openshift-apiserver-operator |
FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-config-operator | config-operator | config-operator-lock | LeaderElection | openshift-config-operator-68758cbcdb-zh8g5_d97b179e-cc62-4a6f-8500-0c4520f148b7 became leader |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator | kube-storage-version-migrator-operator | OperatorStatusChanged | Status for clusteroperator/kube-storage-version-migrator changed: Progressing changed from True to False ("All is well"),Available changed from False to True ("All is well") |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nRevisionControllerDegraded: configmap \"audit\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "WebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nRevisionControllerDegraded: configmap \"audit\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nResourceSyncControllerDegraded: namespaces \"openshift-oauth-apiserver\" not found" |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 1, desired generation is 2.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: observed generation is 1, desired generation is 2.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: openshiftcontrollermanagers.operator.openshift.io/cluster: observed generation is 0, desired generation is 2." to "Progressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | StartingNewRevision | new revision 1 triggered by "configmap \"kube-scheduler-pod-0\" not found" |
| | openshift-service-ca-operator | service-ca-operator-status-controller-statussyncer_service-ca | service-ca-operator | OperatorVersionChanged | clusteroperator/service-ca version "operator" changed from "" to "4.18.29" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": serviceaccounts \"localhost-recovery-client\" not found" |
| | openshift-service-ca-operator | service-ca-operator-status-controller-statussyncer_service-ca | service-ca-operator | OperatorStatusChanged | Status for clusteroperator/service-ca changed: status.versions changed from [] to [{"operator" "4.18.29"}] |
| | openshift-service-ca-operator | service-ca-operator-status-controller-statussyncer_service-ca | service-ca-operator | OperatorStatusChanged | Status for clusteroperator/service-ca changed: Progressing changed from True to False ("Progressing: All service-ca-operator deployments updated") |
| | openshift-apiserver-operator | openshift-apiserver-operator | openshift-apiserver-operator-lock | LeaderElection | openshift-apiserver-operator-7bf7f6b755-sh6qf_1aa494ed-4816-41da-9ea6-6c72d75ace19 became leader |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/kube-scheduler-pod-1 -n openshift-kube-scheduler because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-config-observer-configobserver | openshift-apiserver-operator | ObserveStorageUpdated | Updated storage urls to https://192.168.32.10:2379 |
| | openshift-cluster-node-tuning-operator | multus | cluster-node-tuning-operator-85cff47f46-4gv5j | AddedInterface | Add eth0 [10.128.0.7/23] from ovn-kubernetes |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Degraded changed from Unknown to False ("All is well"),Available changed from Unknown to False ("APIServicesAvailable: endpoints \"api\" not found") |
| (x5) | openshift-ingress-operator | kubelet | ingress-operator-8649c48786-cx2b2 | FailedMount | MountVolume.SetUp failed for volume "metrics-tls" : secret "metrics-tls" not found |
| (x5) | openshift-dns-operator | kubelet | dns-operator-7c56cf9b74-xz27r | FailedMount | MountVolume.SetUp failed for volume "metrics-tls" : secret "metrics-tls" not found |
| | openshift-authentication-operator | oauth-apiserver-audit-policy-controller-auditpolicycontroller | authentication-operator | ConfigMapCreated | Created ConfigMap/audit -n openshift-oauth-apiserver because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Progressing changed from Unknown to False ("All is well") |
| | openshift-authentication-operator | oauth-apiserver-revisioncontroller | authentication-operator | StartingNewRevision | new revision 1 triggered by "configmap \"audit-0\" not found" |
| | openshift-apiserver-operator | openshift-apiserver-operator-config-observer-configobserver | openshift-apiserver-operator | ObserveFeatureFlagsUpdated | Updated apiServerArguments.feature-gates to AWSEFSDriverVolumeMetrics=true,AdminNetworkPolicy=true,AlibabaPlatform=true,AzureWorkloadIdentity=true,BareMetalLoadBalancer=true,BuildCSIVolumes=true,ChunkSizeMiB=true,CloudDualStackNodeIPs=true,DisableKubeletCloudCredentialProviders=true,GCPLabelsTags=true,HardwareSpeed=true,IngressControllerLBSubnetsAWS=true,KMSv1=true,ManagedBootImages=true,ManagedBootImagesAWS=true,MultiArchInstallAWS=true,MultiArchInstallGCP=true,NetworkDiagnosticsConfig=true,NetworkLiveMigration=true,NetworkSegmentation=true,NewOLM=true,NodeDisruptionPolicy=true,OnClusterBuild=true,PersistentIPsForVirtualization=true,PrivateHostedZoneAWS=true,SetEIPForNLBIngressController=true,VSphereControlPlaneMachineSet=true,VSphereDriverConfiguration=true,VSphereMultiVCenters=true,VSphereStaticIPs=true,ValidatingAdmissionPolicy=true,AWSClusterHostedDNS=false,AdditionalRoutingCapabilities=false,AutomatedEtcdBackup=false,BootcNodeManagement=false,CSIDriverSharedResource=false,ClusterAPIInstall=false,ClusterAPIInstallIBMCloud=false,ClusterMonitoringConfig=false,ConsolePluginContentSecurityPolicy=false,DNSNameResolver=false,DynamicResourceAllocation=false,EtcdBackendQuota=false,EventedPLEG=false,Example=false,ExternalOIDC=false,ExternalOIDCWithUIDAndExtraClaimMappings=false,GCPClusterHostedDNS=false,GatewayAPI=false,ImageStreamImportMode=false,IngressControllerDynamicConfigurationManager=false,InsightsConfig=false,InsightsConfigAPI=false,InsightsOnDemandDataGather=false,InsightsRuntimeExtractor=false,MachineAPIMigration=false,MachineAPIOperatorDisableMachineHealthCheckController=false,MachineAPIProviderOpenStack=false,MachineConfigNodes=false,MaxUnavailableStatefulSet=false,MetricsCollectionProfiles=false,MinimumKubeletVersion=false,MixedCPUsAllocation=false,MultiArchInstallAzure=false,NodeSwap=false,NutanixMultiSubnets=false,OVNObservability=false,OpenShiftPodSecurityAdmission=false,PinnedImages=false,PlatformOperators=false,ProcMountType=false,RouteAdvertisements=false,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,SignatureStores=false,SigstoreImageVerification=false,StreamingCollectionEncodingToJSON=false,StreamingCollectionEncodingToProtobuf=false,TranslateStreamCloseWebsocketRequests=false,UpgradeStatus=false,UserNamespacesPodSecurityStandards=false,UserNamespacesSupport=false,VSphereMultiNetworks=false,VolumeAttributesClass=false,VolumeGroupSnapshot=false |
| | openshift-apiserver-operator | openshift-apiserver-operator-config-observer-configobserver | openshift-apiserver-operator | RoutingConfigSubdomainChanged | Domain changed from "" to "apps.sno.openstack.lab" |
| | openshift-cluster-version | kubelet | cluster-version-operator-77dfcc565f-nqpsd | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-release@sha256:8c885ea0b3c5124989f0a9b93eba98eb9fca6bbd0262772d85d90bf713a4d572" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/config-1 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nCertRotation_ExternalLoadBalancerServing_Degraded: configmaps \"loadbalancer-serving-ca\" already exists" to "NodeControllerDegraded: All master nodes are ready\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nCertRotation_ControlPlaneNodeAdminClient_Degraded: configmaps \"kube-control-plane-signer-ca\" already exists\nCertRotation_ExternalLoadBalancerServing_Degraded: configmaps \"loadbalancer-serving-ca\" already exists" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | kube-controller-manager-operator | ServiceCreated | Created Service/kube-controller-manager -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-targetconfigcontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/recycler-config -n openshift-kube-controller-manager because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-config-observer-configobserver | openshift-apiserver-operator | ObserveTLSSecurityProfile | minTLSVersion changed to VersionTLS12 |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | TargetUpdateRequired | "control-plane-node-admin-client-cert-key" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller-cert-rotation-controller-ControlPlaneNodeAdminClient-certrotationcontroller | kube-apiserver-operator | RotationError | configmaps "kube-control-plane-signer-ca" already exists |
| | openshift-apiserver-operator | openshift-apiserver-operator-config-observer-configobserver | openshift-apiserver-operator | ObserveTLSSecurityProfile | cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"] |
| (x5) | openshift-image-registry | kubelet | cluster-image-registry-operator-6fb9f88b7-tgvfl | FailedMount | MountVolume.SetUp failed for volume "image-registry-operator-tls" : secret "image-registry-operator-tls" not found |
| | openshift-apiserver-operator | openshift-apiserver-operator-config-observer-configobserver | openshift-apiserver-operator | ObservedConfigChanged | Writing updated observed config: map[string]any{ + "apiServerArguments": map[string]any{ + "feature-gates": []any{ + string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), + string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), + string("BareMetalLoadBalancer=true"), string("BuildCSIVolumes=true"), + string("ChunkSizeMiB=true"), string("CloudDualStackNodeIPs=true"), ..., + }, + }, + "projectConfig": map[string]any{"projectRequestMessage": string("")}, + "routingConfig": map[string]any{"subdomain": string("apps.sno.openstack.lab")}, + "servingInfo": map[string]any{ + "cipherSuites": []any{ + string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), + string("TLS_CHACHA20_POLY1305_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"), ..., + }, + "minTLSVersion": string("VersionTLS12"), + }, + "storageConfig": map[string]any{"urls": []any{string("https://192.168.32.10:2379")}}, } |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/etcd-serving-ca -n openshift-kube-apiserver because it was missing |
| | openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-85cff47f46-4gv5j | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5451aa441e5b8d8689c032405d410c8049a849ef2edf77e5b6a5ce2838c6569b" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/config-1 -n openshift-kube-scheduler because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "All is well" to "RevisionControllerDegraded: configmap \"audit\" not found" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-target-config-controller-targetconfigcontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-server-ca -n openshift-kube-apiserver because it was missing |
| | openshift-authentication-operator | oauth-apiserver-openshiftauthenticatorcertrequester | authentication-operator | ClientCertificateCreated | A new client certificate for OpenShiftAuthenticatorCertRequester is available |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | kube-controller-manager-operator | ServiceAccountCreated | Created ServiceAccount/kube-controller-manager-sa -n openshift-kube-controller-manager because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller | etcd-operator | EtcdEndpointsErrorUpdatingStatus | Operation cannot be fulfilled on etcds.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again |
| | openshift-apiserver-operator | openshift-apiserver-operator-audit-policy-controller-auditpolicycontroller | openshift-apiserver-operator | FastControllerResync | Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling |
| | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-56fcb6cc5f-4xwp2 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:86af77350cfe6fd69280157e4162aa0147873d9431c641ae4ad3e881ff768a73" in 2.461s (2.461s including waiting). Image size: 505628211 bytes. |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-pod-1 -n openshift-etcd because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-target-config-controller-targetconfigcontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/restore-etcd-pod -n openshift-etcd because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/aggregator-client-ca -n openshift-kube-apiserver because it was missing |
| | openshift-authentication-operator | cluster-authentication-operator-resource-sync-controller-resourcesynccontroller | authentication-operator | ConfigMapCreated | Created ConfigMap/etcd-serving-ca -n openshift-oauth-apiserver because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/serviceaccount-ca-1 -n openshift-kube-scheduler because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-script-controller-scriptcontroller | etcd-operator | ScriptControllerErrorUpdatingStatus | Operation cannot be fulfilled on etcds.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/scheduler-kubeconfig-1 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretCreated | Created Secret/control-plane-node-admin-client-cert-key -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-1 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | ServiceAccountCreated | Created ServiceAccount/localhost-recovery-client -n openshift-kube-apiserver because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-audit-policy-controller-auditpolicycontroller | openshift-apiserver-operator | ConfigMapCreated | Created ConfigMap/audit -n openshift-apiserver because it was missing |
| | openshift-cluster-version | kubelet | cluster-version-operator-77dfcc565f-nqpsd | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-release@sha256:8c885ea0b3c5124989f0a9b93eba98eb9fca6bbd0262772d85d90bf713a4d572" in 3.053s (3.053s including waiting). Image size: 512452153 bytes. |
| | openshift-cluster-version | kubelet | cluster-version-operator-77dfcc565f-nqpsd | Created | Created container: cluster-version-operator |
| | openshift-cluster-version | kubelet | cluster-version-operator-77dfcc565f-nqpsd | Started | Started container cluster-version-operator |
| (x5) | openshift-controller-manager | kubelet | controller-manager-5c6c4578c-plvql | FailedMount | MountVolume.SetUp failed for volume "client-ca" : configmap "client-ca" not found |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-endpoints-1 -n openshift-etcd because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | kube-controller-manager-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:kube-controller-manager-recovery because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | kube-controller-manager-operator | ServiceAccountCreated | Created ServiceAccount/pv-recycler-controller -n openshift-infra because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-targetconfigcontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/csr-signer-ca -n openshift-kube-controller-manager-operator because it was missing |
| | openshift-authentication-operator | cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources | authentication-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-authentication because it was missing |
| | openshift-authentication-operator | cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources | authentication-operator | NamespaceCreated | Created Namespace/openshift-authentication because it was missing |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-apiserver namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-authentication namespace |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/bound-sa-token-signing-certs-1 -n openshift-kube-apiserver because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found" to "RevisionControllerDegraded: configmap \"audit\" not found\nResourceSyncControllerDegraded: namespaces \"openshift-apiserver\" not found" |
| | openshift-apiserver-operator | openshift-apiserver-operator-resource-sync-controller-resourcesynccontroller | openshift-apiserver-operator | SecretCreated | Created Secret/etcd-client -n openshift-apiserver because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-revisioncontroller | openshift-apiserver-operator | StartingNewRevision | new revision 1 triggered by "configmap \"audit-0\" not found" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-1 -n openshift-kube-scheduler because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiserverstaticresources-apiserverstaticresources-staticresources | openshift-apiserver-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-apiserver because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiserverstaticresources-apiserverstaticresources-staticresources | openshift-apiserver-operator | NamespaceCreated | Created Namespace/openshift-apiserver because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-resource-sync-controller-resourcesynccontroller | openshift-apiserver-operator | ConfigMapCreateFailed | Failed to create ConfigMap/etcd-serving-ca -n openshift-apiserver: namespaces "openshift-apiserver" not found |
| | openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/catalogd-leader-election-role -n openshift-catalogd because it was missing |
| | openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/catalogd-manager-role because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | StorageVersionMigrationCreated | Created StorageVersionMigration.migration.k8s.io/flowcontrol-prioritylevel-storage-version-migration because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | StorageVersionMigrationCreated | Created StorageVersionMigration.migration.k8s.io/flowcontrol-flowschema-storage-version-migration-v1beta3 because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | PrometheusRuleCreated | Created PrometheusRule.monitoring.coreos.com/audit-errors -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | PrometheusRuleCreated | Created PrometheusRule.monitoring.coreos.com/api-usage -n openshift-kube-apiserver because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-all-bundles-1 -n openshift-etcd because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | StorageVersionMigrationCreated | Created StorageVersionMigration.migration.k8s.io/flowcontrol-flowschema-storage-version-migration because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | kube-controller-manager-operator | ServiceAccountCreated | Created ServiceAccount/localhost-recovery-client -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | StorageVersionMigrationCreated | Created StorageVersionMigration.migration.k8s.io/flowcontrol-prioritylevel-storage-version-migration-v1beta3 because it was missing |
| | openshift-cluster-olm-operator | cluster-olm-operator | cluster-olm-operator-lock | LeaderElection | cluster-olm-operator-56fcb6cc5f-4xwp2_8a588a42-c76b-4afd-857a-e5ea46860a6a became leader |
| | openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | NamespaceCreated | Created Namespace/openshift-catalogd because it was missing |
| (x2) | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorVersionChanged | clusteroperator/olm version "operator" changed from "" to "4.18.29" |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [{"" "namespaces" "" "openshift-cluster-olm-operator"} {"operator.openshift.io" "olms" "" "cluster"}] to [{"" "namespaces" "" "openshift-catalogd"} {"apiextensions.k8s.io" "customresourcedefinitions" "" "clustercatalogs.olm.operatorframework.io"} {"" "serviceaccounts" "openshift-catalogd" "catalogd-controller-manager"} {"rbac.authorization.k8s.io" "roles" "openshift-catalogd" "catalogd-leader-election-role"} {"rbac.authorization.k8s.io" "roles" "openshift-config" "catalogd-manager-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "catalogd-manager-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "catalogd-metrics-reader"} {"rbac.authorization.k8s.io" "clusterroles" "" "catalogd-proxy-role"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-catalogd" "catalogd-leader-election-rolebinding"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-config" "catalogd-manager-rolebinding"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "catalogd-manager-rolebinding"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "catalogd-proxy-rolebinding"} {"" "configmaps" "openshift-catalogd" "catalogd-trusted-ca-bundle"} {"" "services" "openshift-catalogd" "catalogd-service"} {"apps" "deployments" "openshift-catalogd" "catalogd-controller-manager"} {"olm.operatorframework.io" "clustercatalogs" "" "openshift-certified-operators"} {"olm.operatorframework.io" "clustercatalogs" "" "openshift-community-operators"} {"olm.operatorframework.io" "clustercatalogs" "" "openshift-redhat-marketplace"} {"olm.operatorframework.io" "clustercatalogs" "" "openshift-redhat-operators"} {"admissionregistration.k8s.io" "mutatingwebhookconfigurations" "" "catalogd-mutating-webhook-configuration"} {"" "namespaces" "" "openshift-operator-controller"} {"apiextensions.k8s.io" "customresourcedefinitions" "" "clusterextensions.olm.operatorframework.io"} {"" "serviceaccounts" "openshift-operator-controller" "operator-controller-controller-manager"} {"rbac.authorization.k8s.io" "roles" "openshift-config" "operator-controller-manager-role"} {"rbac.authorization.k8s.io" "roles" "openshift-operator-controller" "operator-controller-leader-election-role"} {"rbac.authorization.k8s.io" "roles" "openshift-operator-controller" "operator-controller-manager-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-clusterextension-editor-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-clusterextension-viewer-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-extension-editor-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-extension-viewer-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-manager-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-metrics-reader"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-proxy-role"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-config" "operator-controller-manager-rolebinding"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-operator-controller" "operator-controller-leader-election-rolebinding"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-operator-controller" "operator-controller-manager-rolebinding"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "operator-controller-manager-rolebinding"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "operator-controller-proxy-rolebinding"} {"" "configmaps" "openshift-operator-controller" "operator-controller-trusted-ca-bundle"} {"" "services" "openshift-operator-controller" "operator-controller-controller-manager-metrics-service"} {"apps" "deployments" "openshift-operator-controller" "operator-controller-controller-manager"} {"operator.openshift.io" "olms" "" "cluster"} {"" "namespaces" "" "openshift-cluster-olm-operator"}],status.versions changed from [] to [{"operator" "4.18.29"}] |
| | openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | CustomResourceDefinitionCreated | Created CustomResourceDefinition.apiextensions.k8s.io/clustercatalogs.olm.operatorframework.io because it was missing |
| | openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | NamespaceCreated | Created Namespace/openshift-operator-controller because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | CustomResourceDefinitionUpdated | Updated CustomResourceDefinition.apiextensions.k8s.io/apirequestcounts.apiserver.openshift.io because it changed |
| | openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | ServiceAccountCreated | Created ServiceAccount/catalogd-controller-manager -n openshift-catalogd because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-targetconfigcontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/csr-controller-ca -n openshift-kube-controller-manager-operator because it was missing |
| | openshift-cluster-version | openshift-cluster-version | version | LeaderElection | master-0_369556f3-f56b-4ce4-81c9-ae023ba4b139 became leader |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | TargetConfigDeleted | Deleted target configmap openshift-kube-apiserver/kubelet-serving-ca because source config does not exist |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-client-token -n openshift-kube-apiserver because it was missing |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Upgradeable changed from Unknown to True ("All is well") |
| | openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | CustomResourceDefinitionCreated | Created CustomResourceDefinition.apiextensions.k8s.io/clusterextensions.olm.operatorframework.io because it was missing |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Degraded changed from Unknown to False ("All is well") |
| | openshift-apiserver-operator | openshift-apiserver-operator-openshiftapiserver-workloadworkloadcontroller | openshift-apiserver-operator | ConfigMapCreated | Created ConfigMap/config -n openshift-apiserver because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-resource-sync-controller-resourcesynccontroller | openshift-apiserver-operator | ConfigMapCreated | Created ConfigMap/etcd-serving-ca -n openshift-apiserver because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nResourceSyncControllerDegraded: namespaces \"openshift-apiserver\" not found" to "RevisionControllerDegraded: configmap \"audit\" not found" |
| | openshift-config-operator | config-operator | config-operator-lock | LeaderElection | openshift-config-operator-68758cbcdb-zh8g5_8ec95e5e-a5f8-4a2d-a541-4ee7452e677b became leader |
| | openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/catalogd-manager-role -n openshift-config because it was missing |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-operator-controller namespace |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | PrometheusRuleCreated | Created PrometheusRule.monitoring.coreos.com/kube-apiserver-slos-basic -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | PrometheusRuleCreated | Created PrometheusRule.monitoring.coreos.com/kube-apiserver-requests -n openshift-kube-apiserver because it was missing |
| | openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | ServiceAccountCreated | Created ServiceAccount/operator-controller-controller-manager -n openshift-operator-controller because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": serviceaccounts \"localhost-recovery-client\" not found" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found" |
| | openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/operator-controller-manager-role -n openshift-config because it was missing |
| (x2) | openshift-config-operator | kubelet | openshift-config-operator-68758cbcdb-zh8g5 | Created | Created container: openshift-config-operator |
| (x2) | openshift-config-operator | kubelet | openshift-config-operator-68758cbcdb-zh8g5 | Started | Started container openshift-config-operator |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: endpoints \"api\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nRevisionControllerDegraded: configmap \"audit\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nResourceSyncControllerDegraded: namespaces \"openshift-oauth-apiserver\" not found" to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nRevisionControllerDegraded: configmap \"audit\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nResourceSyncControllerDegraded: namespaces \"openshift-oauth-apiserver\" not found" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "All is well" to "AuthenticatorCertKeyProgressing: All is well" |
| | openshift-authentication-operator | oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources | authentication-operator | ServiceCreated | Created Service/api -n openshift-oauth-apiserver because it was missing |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-catalogd namespace |
| | openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/catalogd-metrics-reader because it was missing |
| | openshift-config-operator | kubelet | openshift-config-operator-68758cbcdb-zh8g5 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3b8d91a25eeb9f02041e947adb3487da3e7ab8449d3d2ad015827e7954df7b34" already present on machine |
| | openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/operator-controller-leader-election-role -n openshift-operator-controller because it was missing |
| | openshift-cluster-version | openshift-cluster-version | version | LoadPayload | Loading payload version="4.18.29" image="quay.io/openshift-release-dev/ocp-release@sha256:8c885ea0b3c5124989f0a9b93eba98eb9fca6bbd0262772d85d90bf713a4d572" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | kube-controller-manager-operator | SecretCreated | Created Secret/localhost-recovery-client-token -n openshift-kube-controller-manager because it was missing |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nRevisionControllerDegraded: configmap \"audit\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nResourceSyncControllerDegraded: namespaces \"openshift-oauth-apiserver\" not found" to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nRevisionControllerDegraded: configmap \"audit\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" |
| | openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/operator-controller-manager-role -n openshift-operator-controller because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-revisioncontroller | openshift-apiserver-operator | ConfigMapCreated | Created ConfigMap/audit-1 -n openshift-apiserver because it was missing |
| | openshift-authentication-operator | oauth-apiserver-revisioncontroller | authentication-operator | ConfigMapCreated | Created ConfigMap/audit-1 -n openshift-oauth-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-node-kubeconfig-controller-nodekubeconfigcontroller | kube-apiserver-operator | SecretCreated | Created Secret/node-kubeconfigs -n openshift-kube-apiserver because it was missing |
| | openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/catalogd-proxy-role because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | kube-controller-manager-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:cluster-csr-approver-controller because it was missing |
| | openshift-authentication-operator | cluster-authentication-operator-resource-sync-controller-resourcesynccontroller | authentication-operator | SecretCreated | Created Secret/etcd-client -n openshift-oauth-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/etcd-serving-ca-1 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | PrometheusRuleCreated | Created PrometheusRule.monitoring.coreos.com/podsecurity -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-client-ca -n openshift-config-managed because it was missing |
| | openshift-cluster-version | openshift-cluster-version | version | RetrievePayload | Retrieving and verifying payload version="4.18.29" image="quay.io/openshift-release-dev/ocp-release@sha256:8c885ea0b3c5124989f0a9b93eba98eb9fca6bbd0262772d85d90bf713a4d572" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | SecretCreated | Created Secret/localhost-recovery-client-token-1 -n openshift-kube-scheduler because it was missing |
| | openshift-config-operator | config-operator-configoperatorcontroller | openshift-config-operator | FastControllerResync | Controller "ConfigOperatorController" resync interval is set to 10s which might lead to client request throttling |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/csr-controller-ca -n openshift-config-managed because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiserverstaticresources-apiserverstaticresources-staticresources | openshift-apiserver-operator | ServiceCreated | Created Service/api -n openshift-apiserver because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | kube-controller-manager-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:cluster-csr-approver-controller because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServicesAvailable: endpoints \"api\" not found" to "APIServicesAvailable: PreconditionNotReady" |
| (x5) | openshift-route-controller-manager | kubelet | route-controller-manager-5f845c897b-hhhv6 | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found |
| | openshift-apiserver-operator | openshift-apiserver-operator-openshiftapiserver-workloadworkloadcontroller | openshift-apiserver-operator | ConfigMapCreated | Created ConfigMap/image-import-ca -n openshift-apiserver because it was missing |
| | openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/catalogd-leader-election-rolebinding -n openshift-catalogd because it was missing |
| (x53) | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | RequiredInstallerResourcesMissing | configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0 |
| (x5) | openshift-route-controller-manager | kubelet | route-controller-manager-5f845c897b-hhhv6 | FailedMount | MountVolume.SetUp failed for volume "client-ca" : configmap "client-ca" not found |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | RevisionTriggered | new revision 1 triggered by "configmap \"kube-scheduler-pod-0\" not found" |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiserverstaticresources-apiserverstaticresources-staticresources | openshift-apiserver-operator | ServiceAccountCreated | Created ServiceAccount/openshift-apiserver-sa -n openshift-apiserver because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | SecretCreated | Created Secret/etcd-all-certs-1 -n openshift-etcd because it was missing |
| | openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/operator-controller-clusterextension-editor-role because it was missing |
| | openshift-authentication-operator | cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources | authentication-operator | ConfigMapCreated | Created ConfigMap/v4-0-config-system-trusted-ca-bundle -n openshift-authentication because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/client-ca -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | TargetConfigDeleted | Deleted target configmap openshift-config-managed/kubelet-serving-ca because source config does not exist |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-server-ca-1 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | StartingNewRevision | new revision 2 triggered by "optional secret/serving-cert has been created" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]" |
| | openshift-cluster-node-tuning-operator | default-scheduler | tuned-dp5s8 | Scheduled | Successfully assigned openshift-cluster-node-tuning-operator/tuned-dp5s8 to master-0 |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nCertRotation_ControlPlaneNodeAdminClient_Degraded: configmaps \"kube-control-plane-signer-ca\" already exists\nCertRotation_ExternalLoadBalancerServing_Degraded: configmaps \"loadbalancer-serving-ca\" already exists" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nCertRotation_ControlPlaneNodeAdminClient_Degraded: configmaps \"kube-control-plane-signer-ca\" already exists\nCertRotation_ExternalLoadBalancerServing_Degraded: configmaps \"loadbalancer-serving-ca\" already exists" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-server-ca -n openshift-config-managed because it was missing |
| | openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-85cff47f46-4gv5j | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5451aa441e5b8d8689c032405d410c8049a849ef2edf77e5b6a5ce2838c6569b" in 5.786s (5.786s including waiting). Image size: 672407260 bytes. |
| | openshift-cluster-version | openshift-cluster-version | version | PayloadLoaded | Payload loaded version="4.18.29" image="quay.io/openshift-release-dev/ocp-release@sha256:8c885ea0b3c5124989f0a9b93eba98eb9fca6bbd0262772d85d90bf713a4d572" architecture="amd64" |
| | openshift-cluster-node-tuning-operator | performance-profile-controller | cluster-node-tuning-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-cluster-node-tuning-operator | kubelet | tuned-dp5s8 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5451aa441e5b8d8689c032405d410c8049a849ef2edf77e5b6a5ce2838c6569b" already present on machine |
| | openshift-cluster-node-tuning-operator | cluster-node-tuning-operator-85cff47f46-4gv5j_f2670b1b-f99d-44e8-a975-3fabcb7f9f2b | node-tuning-operator-lock | LeaderElection | cluster-node-tuning-operator-85cff47f46-4gv5j_f2670b1b-f99d-44e8-a975-3fabcb7f9f2b became leader |
| | openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/catalogd-manager-rolebinding -n openshift-config because it was missing |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]" to "NodeControllerDegraded: All master nodes are ready",Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 1"),Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 1" | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-targetconfigcontroller |
kube-controller-manager-operator |
SecretCreated |
Created Secret/csr-signer -n openshift-kube-controller-manager because it was missing | |
openshift-cluster-node-tuning-operator |
daemonset-controller |
tuned |
SuccessfulCreate |
Created pod: tuned-dp5s8 | |
openshift-apiserver-operator |
openshift-apiserver-operator-revisioncontroller |
openshift-apiserver-operator |
RevisionTriggered |
new revision 1 triggered by "configmap \"audit-0\" not found" | |
openshift-apiserver-operator |
openshift-apiserver-operator-apiserverstaticresources-apiserverstaticresources-staticresources |
openshift-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/trusted-ca-bundle -n openshift-apiserver because it was missing | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found" to "All is well" | |
openshift-cluster-olm-operator |
CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources |
cluster-olm-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/catalogd-manager-rolebinding because it was missing | |
openshift-cluster-olm-operator |
OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources |
cluster-olm-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/operator-controller-clusterextension-viewer-role because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-installer-controller |
openshift-kube-scheduler-operator |
NodeTargetRevisionChanged |
Updating node "master-0" from revision 0 to 1 because node master-0 static pod not found | |
openshift-cluster-olm-operator |
CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources |
cluster-olm-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/catalogd-proxy-rolebinding because it was missing | |
openshift-cluster-olm-operator |
OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources |
cluster-olm-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/operator-controller-extension-editor-role because it was missing | |
openshift-dns-operator |
kubelet |
dns-operator-7c56cf9b74-xz27r |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3c1edf52f70bf9b1d1457e0c4111bc79cdaa1edd659ddbdb9d8176eff8b46956" | |
openshift-ingress-operator |
kubelet |
ingress-operator-8649c48786-cx2b2 |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:831f30660844091d6154e2674d3a9da6f34271bf8a2c40b56f7416066318742b" | |
| (x6) | openshift-multus | kubelet | network-metrics-daemon-gp57t | FailedMount | MountVolume.SetUp failed for volume "metrics-certs" : secret "metrics-daemon-secret" not found |
| | openshift-ingress-operator | multus | ingress-operator-8649c48786-cx2b2 | AddedInterface | Add eth0 [10.128.0.20/23] from ovn-kubernetes |
| | openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | ConfigMapCreated | Created ConfigMap/catalogd-trusted-ca-bundle -n openshift-catalogd because it was missing |
| | openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | ServiceCreated | Created Service/catalogd-service -n openshift-catalogd because it was missing |
| | openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | MutatingWebhookConfigurationCreated | Created MutatingWebhookConfiguration.admissionregistration.k8s.io/catalogd-mutating-webhook-configuration because it was missing |
| | openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/operator-controller-extension-viewer-role because it was missing |
| (x2) | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | StartingNewRevision | new revision 1 triggered by "configmap \"kube-apiserver-pod-0\" not found" |
| | openshift-authentication-operator | cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources | authentication-operator | SecretCreated | Created Secret/v4-0-config-system-ocp-branding-template -n openshift-authentication because it was missing |
| (x6) | openshift-multus | kubelet | multus-admission-controller-7dfc5b745f-258xq | FailedMount | MountVolume.SetUp failed for volume "webhook-certs" : secret "multus-admission-controller-secret" not found |
| | openshift-dns-operator | multus | dns-operator-7c56cf9b74-xz27r | AddedInterface | Add eth0 [10.128.0.9/23] from ovn-kubernetes |
| | openshift-image-registry | kubelet | cluster-image-registry-operator-6fb9f88b7-tgvfl | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aa24edce3d740f84c40018e94cdbf2bc7375268d13d57c2d664e43a46ccea3fc" |
| (x6) | openshift-monitoring | kubelet | cluster-monitoring-operator-7ff994598c-p82nn | FailedMount | MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" : secret "cluster-monitoring-operator-tls" not found |
| (x6) | openshift-marketplace | kubelet | marketplace-operator-f797b99b6-hjjrk | FailedMount | MountVolume.SetUp failed for volume "marketplace-operator-metrics" : secret "marketplace-operator-metrics" not found |
| | openshift-image-registry | multus | cluster-image-registry-operator-6fb9f88b7-tgvfl | AddedInterface | Add eth0 [10.128.0.24/23] from ovn-kubernetes |
| (x6) | openshift-operator-lifecycle-manager | kubelet | catalog-operator-fbc6455c4-5m8ll | FailedMount | MountVolume.SetUp failed for volume "srv-cert" : secret "catalog-operator-serving-cert" not found |
| (x6) | openshift-operator-lifecycle-manager | kubelet | olm-operator-7cd7dbb44c-vzj4q | FailedMount | MountVolume.SetUp failed for volume "srv-cert" : secret "olm-operator-serving-cert" not found |
| | openshift-cluster-node-tuning-operator | kubelet | tuned-dp5s8 | Started | Started container tuned |
| | openshift-cluster-node-tuning-operator | kubelet | tuned-dp5s8 | Created | Created container: tuned |
| (x6) | openshift-operator-lifecycle-manager | kubelet | package-server-manager-67477646d4-7hndf | FailedMount | MountVolume.SetUp failed for volume "package-server-manager-serving-cert" : secret "package-server-manager-serving-cert" not found |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/kube-scheduler-pod-2 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | SecretCreated | Created Secret/etcd-client -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nCertRotation_ControlPlaneNodeAdminClient_Degraded: configmaps \"kube-control-plane-signer-ca\" already exists\nCertRotation_ExternalLoadBalancerServing_Degraded: configmaps \"loadbalancer-serving-ca\" already exists" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmaps \"kubelet-serving-ca\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nCertRotation_ControlPlaneNodeAdminClient_Degraded: configmaps \"kube-control-plane-signer-ca\" already exists\nCertRotation_ExternalLoadBalancerServing_Degraded: configmaps \"loadbalancer-serving-ca\" already exists" |
| | openshift-authentication-operator | oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources | authentication-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:useroauthaccesstoken-manager because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Progressing changed from False to True ("OperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 0, desired generation is 2.") |
| | openshift-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled up replica set apiserver-97f98c4dd to 1 |
| | openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/operator-controller-manager-role because it was missing |
| (x2) | openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kubelet-serving-ca -n openshift-kube-apiserver because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-openshiftapiserver-workloadworkloadcontroller | openshift-apiserver-operator | DeploymentCreated | Created Deployment.apps/apiserver -n openshift-apiserver because it was missing |
| | openshift-apiserver | replicaset-controller | apiserver-97f98c4dd | SuccessfulCreate | Created pod: apiserver-97f98c4dd-r2hjk |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/config-2 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | PodCreated | Created Pod/installer-1-master-0 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-targetconfigcontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/serviceaccount-ca -n openshift-kube-controller-manager because it was missing |
| | openshift-apiserver | default-scheduler | apiserver-97f98c4dd-r2hjk | Scheduled | Successfully assigned openshift-apiserver/apiserver-97f98c4dd-r2hjk to master-0 |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "OperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 0, desired generation is 2." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 0, desired generation is 1.\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 0, desired generation is 2.",Available message changed from "APIServicesAvailable: PreconditionNotReady" to "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady" |
| | openshift-authentication-operator | oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources | authentication-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:openshift:useroauthaccesstoken-manager because it was missing |
| | openshift-authentication-operator | oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources | authentication-operator | ServiceAccountCreated | Created ServiceAccount/oauth-apiserver-sa -n openshift-oauth-apiserver because it was missing |
| | openshift-authentication-operator | oauth-apiserver-revisioncontroller | authentication-operator | RevisionTriggered | new revision 1 triggered by "configmap \"audit-0\" not found" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nRevisionControllerDegraded: configmap \"audit\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" |
| | openshift-kube-scheduler | kubelet | installer-1-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f042fa25014f3d37f3ea967d21f361d2a11833ae18f2c750318101b25d2497ce" already present on machine |
| | openshift-kube-scheduler | multus | installer-1-master-0 | AddedInterface | Add eth0 [10.128.0.34/23] from ovn-kubernetes |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmaps \"kubelet-serving-ca\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nCertRotation_ControlPlaneNodeAdminClient_Degraded: configmaps \"kube-control-plane-signer-ca\" already exists\nCertRotation_ExternalLoadBalancerServing_Degraded: configmaps \"loadbalancer-serving-ca\" already exists" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmaps \"kubelet-serving-ca\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nCertRotation_ExternalLoadBalancerServing_Degraded: configmaps \"loadbalancer-serving-ca\" already exists" |
| | openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/operator-controller-metrics-reader because it was missing |
| | openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled down replica set controller-manager-5c6c4578c to 0 from 1 |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/serviceaccount-ca-2 -n openshift-kube-scheduler because it was missing |
| | openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/operator-controller-proxy-role because it was missing |
| | openshift-route-controller-manager | replicaset-controller | route-controller-manager-f4c4cbbd | SuccessfulCreate | Created pod: route-controller-manager-f4c4cbbd-5bv8h |
| | openshift-controller-manager | replicaset-controller | controller-manager-5c6c4578c | SuccessfulDelete | Deleted pod: controller-manager-5c6c4578c-plvql |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-targetconfigcontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/controller-manager-kubeconfig -n openshift-kube-controller-manager because it was missing |
| | openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled up replica set controller-manager-666dcff694 to 1 from 0 |
| | openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled down replica set route-controller-manager-5f845c897b to 0 from 1 |
| | openshift-route-controller-manager | default-scheduler | route-controller-manager-f4c4cbbd-5bv8h | FailedScheduling | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kubelet-serving-ca-1 -n openshift-kube-apiserver because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapUpdated | Updated ConfigMap/config -n openshift-controller-manager: cause by changes in data.openshift-controller-manager.serving-cert.secret |
| | openshift-controller-manager | replicaset-controller | controller-manager-666dcff694 | SuccessfulCreate | Created pod: controller-manager-666dcff694-54zwc |
| (x2) | openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kubelet-serving-ca -n openshift-config-managed because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapCreated | Created ConfigMap/client-ca -n openshift-route-controller-manager because it was missing |
| | openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled up replica set route-controller-manager-f4c4cbbd to 1 from 0 |
| | openshift-route-controller-manager | replicaset-controller | route-controller-manager-5f845c897b | SuccessfulDelete | Deleted pod: route-controller-manager-5f845c897b-hhhv6 |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapCreated | Created ConfigMap/client-ca -n openshift-controller-manager because it was missing |
| (x4) | openshift-apiserver | kubelet | apiserver-97f98c4dd-r2hjk | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found |
| | openshift-route-controller-manager | kubelet | route-controller-manager-5f845c897b-hhhv6 | FailedMount | MountVolume.SetUp failed for volume "client-ca" : object "openshift-route-controller-manager"/"client-ca" not registered |
| | openshift-route-controller-manager | kubelet | route-controller-manager-5f845c897b-hhhv6 | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : object "openshift-route-controller-manager"/"serving-cert" not registered |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/sa-token-signing-certs-1 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/scheduler-kubeconfig-2 -n openshift-kube-scheduler because it was missing |
| | openshift-authentication-operator | cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources | authentication-operator | ServiceAccountCreated | Created ServiceAccount/oauth-openshift -n openshift-authentication because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-audit-policies-1 -n openshift-kube-apiserver because it was missing |
| | openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/operator-controller-manager-rolebinding -n openshift-config because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-2 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | SecretCreated | Created Secret/localhost-recovery-client-token-2 -n openshift-kube-scheduler because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" to "Progressing: deployment/controller-manager: observed generation is 2, desired generation is 3.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: observed generation is 2, desired generation is 3.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | SecretCreated | Created Secret/serving-cert-2 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-scheduler | kubelet | installer-1-master-0 | Created | Created container: installer |
| | openshift-kube-scheduler | kubelet | installer-1-master-0 | Started | Started container installer |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-targetconfigcontroller | openshift-kube-scheduler-operator | ConfigMapUpdated | Updated ConfigMap/kube-scheduler-pod -n openshift-kube-scheduler: cause by changes in data.pod.yaml |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/etcd-client-1 -n openshift-kube-apiserver because it was missing |
| (x2) | openshift-controller-manager | default-scheduler | controller-manager-666dcff694-54zwc | FailedScheduling | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Degraded message changed from "All is well" to "CatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | StartingNewRevision | new revision 1 triggered by "configmap \"kube-controller-manager-pod-0\" not found" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-targetconfigcontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/kube-controller-manager-pod -n openshift-kube-controller-manager because it was missing |
| | openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | MutatingWebhookConfigurationUpdated | Updated MutatingWebhookConfiguration.admissionregistration.k8s.io/catalogd-mutating-webhook-configuration because it changed |
openshift-route-controller-manager |
default-scheduler |
route-controller-manager-f4c4cbbd-5bv8h |
Scheduled |
Successfully assigned openshift-route-controller-manager/route-controller-manager-f4c4cbbd-5bv8h to master-0 | |
openshift-authentication-operator |
cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources |
authentication-operator |
ServiceCreated |
Created Service/oauth-openshift -n openshift-authentication because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
RevisionTriggered |
new revision 2 triggered by "optional secret/serving-cert has been created" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
SecretCreated |
Created Secret/localhost-recovery-serving-certkey-1 -n openshift-kube-apiserver because it was missing | |
openshift-cluster-olm-operator |
CatalogdDeploymentCatalogdControllerManager-catalogddeploymentcatalogdcontrollermanager-deployment-controller--catalogddeploymentcatalogdcontrollermanager |
cluster-olm-operator |
DeploymentCreated |
Created Deployment.apps/catalogd-controller-manager -n openshift-catalogd because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
SecretCreated |
Created Secret/localhost-recovery-client-token-1 -n openshift-kube-apiserver because it was missing | |
openshift-authentication-operator |
oauth-apiserver-oauthapiservercontroller-workloadworkloadcontroller |
authentication-operator |
DeploymentCreated |
Created Deployment.apps/apiserver -n openshift-oauth-apiserver because it was missing | |
openshift-cluster-olm-operator |
OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources |
cluster-olm-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/operator-controller-leader-election-rolebinding -n openshift-operator-controller because it was missing | |
openshift-cluster-olm-operator |
olm-status-controller-statussyncer_olm |
cluster-olm-operator |
OperatorStatusChanged |
Status for clusteroperator/olm changed: Degraded message changed from "CatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" to "CatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" | |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-target-config-controller-targetconfigcontroller | kube-apiserver-operator | ConfigMapUpdated | Updated ConfigMap/client-ca -n openshift-kube-apiserver: cause by changes in data.ca-bundle.crt |
| | openshift-catalogd | deployment-controller | catalogd-controller-manager | ScalingReplicaSet | Scaled up replica set catalogd-controller-manager-7cc89f4c4c to 1 |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | RevisionTriggered | new revision 1 triggered by "configmap \"kube-apiserver-pod-0\" not found" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/config-1 -n openshift-kube-controller-manager because it was missing |
| (x20) | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | RequiredInstallerResourcesMissing | configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0 |
| | openshift-ingress-operator | kubelet | ingress-operator-8649c48786-cx2b2 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:831f30660844091d6154e2674d3a9da6f34271bf8a2c40b56f7416066318742b" in 15.761s (15.761s including waiting). Image size: 505649178 bytes. |
| | openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/operator-controller-proxy-rolebinding because it was missing |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Progressing changed from Unknown to True ("CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to act on changes"),Available changed from Unknown to False ("CatalogdDeploymentCatalogdControllerManagerAvailable: Waiting for Deployment") |
| | openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/operator-controller-manager-rolebinding because it was missing |
| | openshift-cluster-olm-operator | OperatorcontrollerDeploymentOperatorControllerControllerManager-operatorcontrollerdeploymentoperatorcontrollercontrollermanager-deployment-controller--operatorcontrollerdeploymentoperatorcontrollercontrollermanager | cluster-olm-operator | DeploymentCreated | Created Deployment.apps/operator-controller-controller-manager -n openshift-operator-controller because it was missing |
| | openshift-controller-manager | default-scheduler | controller-manager-666dcff694-54zwc | Scheduled | Successfully assigned openshift-controller-manager/controller-manager-666dcff694-54zwc to master-0 |
| | openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/operator-controller-manager-rolebinding -n openshift-operator-controller because it was missing |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Degraded message changed from "CatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" to "CatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCertifiedOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Degraded message changed from "CatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" to "CatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller | kube-controller-manager-operator | ConfigMapUpdated | Updated ConfigMap/client-ca -n openshift-kube-controller-manager: cause by changes in data.ca-bundle.crt |
| | openshift-dns-operator | kubelet | dns-operator-7c56cf9b74-xz27r | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3c1edf52f70bf9b1d1457e0c4111bc79cdaa1edd659ddbdb9d8176eff8b46956" in 15.763s (15.763s including waiting). Image size: 462727837 bytes. |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: \"service-ca.crt\" key of the \"openshift-authentication/v4-0-config-system-service-ca\" CM is empty\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: \"service-ca.crt\" key of the \"openshift-authentication/v4-0-config-system-service-ca\" CM is empty\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" |
| | openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | ConfigMapCreated | Created ConfigMap/operator-controller-trusted-ca-bundle -n openshift-operator-controller because it was missing |
| | openshift-catalogd | replicaset-controller | catalogd-controller-manager-7cc89f4c4c | SuccessfulCreate | Created pod: catalogd-controller-manager-7cc89f4c4c-fd9pv |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/cluster-policy-controller-config-1 -n openshift-kube-controller-manager because it was missing |
| | openshift-operator-controller | replicaset-controller | operator-controller-controller-manager-7cbd59c7f8 | SuccessfulCreate | Created pod: operator-controller-controller-manager-7cbd59c7f8-qcz9t |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | StartingNewRevision | new revision 3 triggered by "required configmap/kube-scheduler-pod has changed,required configmap/serviceaccount-ca has changed" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/kube-scheduler-pod-3 -n openshift-kube-scheduler because it was missing |
| | openshift-operator-controller | deployment-controller | operator-controller-controller-manager | ScalingReplicaSet | Scaled up replica set operator-controller-controller-manager-7cbd59c7f8 to 1 |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | ConfigMapUpdated | Updated ConfigMap/kube-apiserver-client-ca -n openshift-config-managed: cause by changes in data.ca-bundle.crt |
| | openshift-operator-controller | default-scheduler | operator-controller-controller-manager-7cbd59c7f8-qcz9t | Scheduled | Successfully assigned openshift-operator-controller/operator-controller-controller-manager-7cbd59c7f8-qcz9t to master-0 |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmaps \"kubelet-serving-ca\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nCertRotation_ExternalLoadBalancerServing_Degraded: configmaps \"loadbalancer-serving-ca\" already exists" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmaps \"kubelet-serving-ca\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]" |
| | openshift-oauth-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled up replica set apiserver-7467446c4b to 1 |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/kube-controller-manager-pod-1 -n openshift-kube-controller-manager because it was missing |
| | openshift-cluster-version | replicaset-controller | cluster-version-operator-77dfcc565f | SuccessfulDelete | Deleted pod: cluster-version-operator-77dfcc565f-nqpsd |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/controller-manager-kubeconfig-1 -n openshift-kube-controller-manager because it was missing |
| | openshift-cluster-version | openshift-cluster-version | version | LeaderElection | master-0_369556f3-f56b-4ce4-81c9-ae023ba4b139 stopped leading |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/kube-controller-cert-syncer-kubeconfig-1 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/config-3 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/serviceaccount-ca-3 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/scheduler-kubeconfig-3 -n openshift-kube-scheduler because it was missing |
| | openshift-oauth-apiserver | replicaset-controller | apiserver-7467446c4b | SuccessfulCreate | Created pod: apiserver-7467446c4b-dlj7g |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/serviceaccount-ca-1 -n openshift-kube-controller-manager because it was missing |
| | openshift-authentication-operator | cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources | authentication-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/system:openshift:oauth-servercert-trust -n openshift-config-managed because it was missing |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: \"service-ca.crt\" key of the \"openshift-authentication/v4-0-config-system-service-ca\" CM is empty\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found",Progressing changed from False to True ("APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 0, desired generation is 1."),Available message changed from "APIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." |
| | openshift-kube-scheduler | kubelet | installer-1-master-0 | Killing | Stopping container installer |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 0, desired generation is 1.\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 0, desired generation is 2." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 0, desired generation is 1." |
| | openshift-apiserver-operator | openshift-apiserver-operator-openshiftapiserver-workloadworkloadcontroller | openshift-apiserver-operator | DeploymentUpdated | Updated Deployment.apps/apiserver -n openshift-apiserver because it changed |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Progressing message changed from "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to act on changes" to "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to act on changes\nOperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to act on changes",Available message changed from "CatalogdDeploymentCatalogdControllerManagerAvailable: Waiting for Deployment" to "CatalogdDeploymentCatalogdControllerManagerAvailable: Waiting for Deployment\nOperatorcontrollerDeploymentOperatorControllerControllerManagerAvailable: Waiting for Deployment" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmaps \"kubelet-serving-ca\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]" |
| | openshift-cluster-version | kubelet | cluster-version-operator-77dfcc565f-nqpsd | Killing | Stopping container cluster-version-operator |
| | openshift-apiserver | replicaset-controller | apiserver-97f98c4dd | SuccessfulDelete | Deleted pod: apiserver-97f98c4dd-r2hjk |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/service-ca-1 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 1" to "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 2",Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 1" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 2" |
| | openshift-oauth-apiserver | default-scheduler | apiserver-7467446c4b-dlj7g | Scheduled | Successfully assigned openshift-oauth-apiserver/apiserver-7467446c4b-dlj7g to master-0 |
| | openshift-image-registry | kubelet | cluster-image-registry-operator-6fb9f88b7-tgvfl | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aa24edce3d740f84c40018e94cdbf2bc7375268d13d57c2d664e43a46ccea3fc" in 15.79s (15.79s including waiting). Image size: 543227406 bytes. |
| | openshift-catalogd | default-scheduler | catalogd-controller-manager-7cc89f4c4c-fd9pv | Scheduled | Successfully assigned openshift-catalogd/catalogd-controller-manager-7cc89f4c4c-fd9pv to master-0 |
| | openshift-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled down replica set apiserver-97f98c4dd to 0 from 1 |
| | openshift-cluster-version | deployment-controller | cluster-version-operator | ScalingReplicaSet | Scaled down replica set cluster-version-operator-77dfcc565f to 0 from 1 |
| | openshift-authentication-operator | cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources | authentication-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/system:openshift:oauth-servercert-trust -n openshift-config-managed because it was missing |
| | openshift-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled up replica set apiserver-6cbcc775d9 to 1 from 0 |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | NodeTargetRevisionChanged | Updating node "master-0" from revision 0 to 1 because node master-0 static pod not found |
| | openshift-etcd | kubelet | installer-1-master-0 | Created | Created container: installer |
| | openshift-etcd | kubelet | installer-1-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f4d4282cb53325e737ad68abbfcb70687ae04fb50353f4f0ba0ba5703b15009a" already present on machine |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.202.245:443/healthz\": dial tcp 172.30.202.245:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." |
| | openshift-etcd | multus | installer-1-master-0 | AddedInterface | Add eth0 [10.128.0.36/23] from ovn-kubernetes |
| | openshift-marketplace | multus | marketplace-operator-f797b99b6-hjjrk | AddedInterface | Add eth0 [10.128.0.14/23] from ovn-kubernetes |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-3 -n openshift-kube-scheduler because it was missing |
| | openshift-route-controller-manager | multus | route-controller-manager-f4c4cbbd-5bv8h | AddedInterface | Add eth0 [10.128.0.37/23] from ovn-kubernetes |
| | openshift-route-controller-manager | kubelet | route-controller-manager-f4c4cbbd-5bv8h | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c416b201d480bddb5a4960ec42f4740761a1335001cf84ba5ae19ad6857771b1" |
| | openshift-image-registry | image-registry-operator | cluster-image-registry-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-image-registry | image-registry-operator | openshift-master-controllers | LeaderElection | cluster-image-registry-operator-6fb9f88b7-tgvfl_f86fd9fc-9a0b-42c8-9102-96ca48ea8651 became leader |
| (x6) | openshift-apiserver | kubelet | apiserver-97f98c4dd-r2hjk | FailedMount | MountVolume.SetUp failed for volume "audit" : configmap "audit-0" not found |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 0, desired generation is 1." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 1/1 pods have been updated to the latest generation and 0/1 pods are available" |
| | openshift-monitoring | multus | cluster-monitoring-operator-7ff994598c-p82nn | AddedInterface | Add eth0 [10.128.0.12/23] from ovn-kubernetes |
| | openshift-monitoring | kubelet | cluster-monitoring-operator-7ff994598c-p82nn | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3a77aa4d03b89ea284e3467a268e5989a77a2ef63e685eb1d5c5ea5b3922b7a" |
| | openshift-dns-operator | cluster-dns-operator | dns-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-ingress-operator | kubelet | ingress-operator-8649c48786-cx2b2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69ffd8f8dcceedc2d6eb306cea33f8beabc1be1308cd5f4ee8b9a8e3eab9843" already present on machine |
| | openshift-ingress-operator | kubelet | ingress-operator-8649c48786-cx2b2 | Created | Created container: kube-rbac-proxy |
| | openshift-dns-operator | kubelet | dns-operator-7c56cf9b74-xz27r | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69ffd8f8dcceedc2d6eb306cea33f8beabc1be1308cd5f4ee8b9a8e3eab9843" already present on machine |
| | openshift-dns-operator | kubelet | dns-operator-7c56cf9b74-xz27r | Started | Started container dns-operator |
| | openshift-dns-operator | kubelet | dns-operator-7c56cf9b74-xz27r | Created | Created container: dns-operator |
| | openshift-ingress-operator | kubelet | ingress-operator-8649c48786-cx2b2 | Started | Started container kube-rbac-proxy |
| | openshift-operator-lifecycle-manager | multus | package-server-manager-67477646d4-7hndf | AddedInterface | Add eth0 [10.128.0.13/23] from ovn-kubernetes |
| | openshift-ingress-operator | cluster-ingress-operator | ingress-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Progressing message changed from "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to deploy pods\nOperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to act on changes" to "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to deploy pods\nOperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to deploy pods" |
| | openshift-operator-lifecycle-manager | multus | catalog-operator-fbc6455c4-5m8ll | AddedInterface | Add eth0 [10.128.0.22/23] from ovn-kubernetes |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Progressing message changed from "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to act on changes\nOperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to act on changes" to "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to deploy pods\nOperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to act on changes" |
| | openshift-apiserver | default-scheduler | apiserver-6cbcc775d9-jrlx4 | FailedScheduling | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 0, desired generation is 1." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 1, desired generation is 2." |
| | openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | ServiceCreated | Created Service/operator-controller-controller-manager-metrics-service -n openshift-operator-controller because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]" to "NodeControllerDegraded: All master nodes are ready",Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 1"),Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 1" |
| | openshift-operator-lifecycle-manager | multus | olm-operator-7cd7dbb44c-vzj4q | AddedInterface | Add eth0 [10.128.0.15/23] from ovn-kubernetes |
| | openshift-operator-lifecycle-manager | kubelet | olm-operator-7cd7dbb44c-vzj4q | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1ca78c423f43f89a0411e40393642f64e4f8df9e5f61c25e31047c4cce170f9" |
| | openshift-multus | multus | network-metrics-daemon-gp57t | AddedInterface | Add eth0 [10.128.0.4/23] from ovn-kubernetes |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: \"service-ca.crt\" key of the \"openshift-authentication/v4-0-config-system-service-ca\" CM is empty\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.202.245:443/healthz\": dial tcp 172.30.202.245:443: connect: connection refused\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/recycler-config-1 -n openshift-kube-controller-manager because it was missing |
| | openshift-apiserver | replicaset-controller | apiserver-6cbcc775d9 | SuccessfulCreate | Created pod: apiserver-6cbcc775d9-jrlx4 |
| | openshift-oauth-apiserver | multus | apiserver-7467446c4b-dlj7g | AddedInterface | Add eth0 [10.128.0.39/23] from ovn-kubernetes |
| | openshift-multus | kubelet | multus-admission-controller-7dfc5b745f-258xq | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4ecc5bac651ff1942865baee5159582e9602c89b47eeab18400a32abcba8f690" |
| | openshift-multus | multus | multus-admission-controller-7dfc5b745f-258xq | AddedInterface | Add eth0 [10.128.0.17/23] from ovn-kubernetes |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.202.245:443/healthz\": dial tcp 172.30.202.245:443: connect: connection refused\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.202.245:443/healthz\": dial tcp 172.30.202.245:443: connect: connection refused\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.202.245:443/healthz\": dial tcp 172.30.202.245:443: connect: connection refused\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.202.245:443/healthz\": dial tcp 172.30.202.245:443: connect: connection refused\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " |
| | openshift-operator-lifecycle-manager | kubelet | catalog-operator-fbc6455c4-5m8ll | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1ca78c423f43f89a0411e40393642f64e4f8df9e5f61c25e31047c4cce170f9" |
| | openshift-catalogd | kubelet | catalogd-controller-manager-7cc89f4c4c-fd9pv | Started | Started container kube-rbac-proxy |
| | openshift-operator-lifecycle-manager | kubelet | package-server-manager-67477646d4-7hndf | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69ffd8f8dcceedc2d6eb306cea33f8beabc1be1308cd5f4ee8b9a8e3eab9843" already present on machine |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-dns namespace |
| | openshift-authentication-operator | cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources | authentication-operator | ConfigMapCreated | Created ConfigMap/audit -n openshift-authentication because it was missing |
| | openshift-dns-operator | kubelet | dns-operator-7c56cf9b74-xz27r | Created | Created container: kube-rbac-proxy |
| | openshift-dns-operator | kubelet | dns-operator-7c56cf9b74-xz27r | Started | Started container kube-rbac-proxy |
| | openshift-controller-manager | multus | controller-manager-666dcff694-54zwc | AddedInterface | Add eth0 [10.128.0.38/23] from ovn-kubernetes |
| | openshift-operator-lifecycle-manager | kubelet | package-server-manager-67477646d4-7hndf | Created | Created container: kube-rbac-proxy |
| | openshift-cluster-version | default-scheduler | cluster-version-operator-6d5d5dcc89-cw2hx | Scheduled | Successfully assigned openshift-cluster-version/cluster-version-operator-6d5d5dcc89-cw2hx to master-0 |
| | openshift-controller-manager | kubelet | controller-manager-666dcff694-54zwc | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eddedae7578d79b5a3f748000ae5c00b9f14a04710f9f9ec7b52fc569be5dfb8" |
| | openshift-operator-lifecycle-manager | kubelet | package-server-manager-67477646d4-7hndf | Started | Started container kube-rbac-proxy |
| | openshift-ingress-operator | ingress_controller | default | Admitted | ingresscontroller passed validation |
| | openshift-cluster-version | replicaset-controller | cluster-version-operator-6d5d5dcc89 | SuccessfulCreate | Created pod: cluster-version-operator-6d5d5dcc89-cw2hx |
| | openshift-operator-lifecycle-manager | kubelet | package-server-manager-67477646d4-7hndf | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1ca78c423f43f89a0411e40393642f64e4f8df9e5f61c25e31047c4cce170f9" |
| | openshift-ingress-operator | certificate_controller | router-ca | CreatedWildcardCACert | Created a default wildcard CA certificate |
| | openshift-catalogd | kubelet | catalogd-controller-manager-7cc89f4c4c-fd9pv | Created | Created container: kube-rbac-proxy |
| | openshift-catalogd | kubelet | catalogd-controller-manager-7cc89f4c4c-fd9pv | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69ffd8f8dcceedc2d6eb306cea33f8beabc1be1308cd5f4ee8b9a8e3eab9843" already present on machine |
| | openshift-catalogd | multus | catalogd-controller-manager-7cc89f4c4c-fd9pv | AddedInterface | Add eth0 [10.128.0.40/23] from ovn-kubernetes |
| | openshift-oauth-apiserver | kubelet | apiserver-7467446c4b-dlj7g | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91af633e585621630c40d14f188e37d36b44678d0a59e582d850bf8d593d3a0c" |
| | openshift-dns | daemonset-controller | dns-default | SuccessfulCreate | Created pod: dns-default-xg7vh |
| | openshift-apiserver | default-scheduler | apiserver-6cbcc775d9-jrlx4 | Scheduled | Successfully assigned openshift-apiserver/apiserver-6cbcc775d9-jrlx4 to master-0 |
| | openshift-cluster-version | deployment-controller | cluster-version-operator | ScalingReplicaSet | Scaled up replica set cluster-version-operator-6d5d5dcc89 to 1 |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | SecretCreated | Created Secret/serving-cert-3 -n openshift-kube-scheduler because it was missing |
| | openshift-etcd | kubelet | installer-1-master-0 | Started | Started container installer |
| | openshift-multus | kubelet | network-metrics-daemon-gp57t | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2632d7f05d5a992e91038ded81c715898f3fe803420a9b67a0201e9fd8075213" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-targetconfigcontroller | kube-controller-manager-operator | ConfigMapUpdated | Updated ConfigMap/config -n openshift-kube-controller-manager: cause by changes in data.config.yaml |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/service-account-private-key-1 -n openshift-kube-controller-manager because it was missing |
| | openshift-marketplace | kubelet | marketplace-operator-f797b99b6-hjjrk | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7664a2d4cb10e82ed32abbf95799f43fc3d10135d7dd94799730de504a89680a" |
| | openshift-operator-controller | multus | operator-controller-controller-manager-7cbd59c7f8-qcz9t | AddedInterface | Add eth0 [10.128.0.41/23] from ovn-kubernetes |
| | openshift-operator-controller | kubelet | operator-controller-controller-manager-7cbd59c7f8-qcz9t | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69ffd8f8dcceedc2d6eb306cea33f8beabc1be1308cd5f4ee8b9a8e3eab9843" already present on machine |
| | openshift-dns | default-scheduler | dns-default-xg7vh | Scheduled | Successfully assigned openshift-dns/dns-default-xg7vh to master-0 |
| | openshift-ingress | replicaset-controller | router-default-5465c8b4db | SuccessfulCreate | Created pod: router-default-5465c8b4db-58d52 |
| | openshift-cluster-version | openshift-cluster-version | version | LeaderElection | master-0_24c3cd95-5f05-4668-945c-5fee4fae08e7 became leader |
| | openshift-kube-scheduler | multus | installer-2-master-0 | AddedInterface | Add eth0 [10.128.0.44/23] from ovn-kubernetes |
| | openshift-operator-controller | kubelet | operator-controller-controller-manager-7cbd59c7f8-qcz9t | Created | Created container: kube-rbac-proxy |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-ingress namespace |
| | openshift-dns | daemonset-controller | node-resolver | SuccessfulCreate | Created pod: node-resolver-qq64m |
| | openshift-dns | kubelet | dns-default-xg7vh | FailedMount | MountVolume.SetUp failed for volume "metrics-tls" : secret "dns-default-metrics-tls" not found |
| | openshift-apiserver | kubelet | apiserver-6cbcc775d9-jrlx4 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:df606f3b71d4376d1a2108c09f0d3dab455fc30bcb67c60e91590c105e9025bf" |
| | openshift-apiserver | multus | apiserver-6cbcc775d9-jrlx4 | AddedInterface | Add eth0 [10.128.0.42/23] from ovn-kubernetes |
| | openshift-ingress | default-scheduler | router-default-5465c8b4db-58d52 | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-ingress | deployment-controller | router-default | ScalingReplicaSet | Scaled up replica set router-default-5465c8b4db to 1 |
| | openshift-operator-controller | operator-controller-controller-manager-7cbd59c7f8-qcz9t_4095f076-77a6-4207-b971-71ba3951a9b4 | 9c4404e7.operatorframework.io | LeaderElection | operator-controller-controller-manager-7cbd59c7f8-qcz9t_4095f076-77a6-4207-b971-71ba3951a9b4 became leader |
| | openshift-operator-controller | kubelet | operator-controller-controller-manager-7cbd59c7f8-qcz9t | Started | Started container kube-rbac-proxy |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 1, desired generation is 2." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/1 pods have been updated to the latest generation and 0/1 pods are available" |
| | openshift-dns | default-scheduler | node-resolver-qq64m | Scheduled | Successfully assigned openshift-dns/node-resolver-qq64m to master-0 |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/serving-cert-1 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | SecretCreated | Created Secret/localhost-recovery-client-token-3 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | PodCreated | Created Pod/installer-2-master-0 -n openshift-kube-scheduler because it was missing |
| | openshift-catalogd | catalogd-controller-manager-7cc89f4c4c-fd9pv_cd8bd47a-1d4e-47b1-a099-fc3de2af4b8f | catalogd-operator-lock | LeaderElection | catalogd-controller-manager-7cc89f4c4c-fd9pv_cd8bd47a-1d4e-47b1-a099-fc3de2af4b8f became leader |
| | openshift-dns | multus | dns-default-xg7vh | AddedInterface | Add eth0 [10.128.0.43/23] from ovn-kubernetes |
| | openshift-config-managed | certificate_publisher_controller | router-certs | PublishedRouterCertificates | Published router certificates |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/localhost-recovery-client-token-1 -n openshift-kube-controller-manager because it was missing |
| | openshift-dns | kubelet | node-resolver-qq64m | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:79f99fd6cce984287932edf0d009660bb488d663081f3d62ec3b23bc8bfbf6c2" already present on machine |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | PodCreated | Created Pod/installer-1-master-0 -n openshift-kube-apiserver because it was missing |
| | openshift-ingress-operator | certificate_controller | default | CreatedDefaultCertificate | Created default wildcard certificate "router-certs-default" |
| | openshift-dns | kubelet | dns-default-xg7vh | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eb928c13a46d3fb45f4a881892d023a92d610a5430be0ffd916aaf8da8e7d297" |
| | openshift-config-managed | certificate_publisher_controller | default-ingress-cert | PublishedRouterCA | Published "default-ingress-cert" in "openshift-config-managed" |
openshift-cluster-version |
openshift-cluster-version |
version |
LoadPayload |
Loading payload version="4.18.29" image="quay.io/openshift-release-dev/ocp-release@sha256:8c885ea0b3c5124989f0a9b93eba98eb9fca6bbd0262772d85d90bf713a4d572" | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-targetconfigcontroller |
kube-controller-manager-operator |
ConfigMapUpdated |
Updated ConfigMap/cluster-policy-controller-config -n openshift-kube-controller-manager: cause by changes in data.config.yaml | |
openshift-cluster-version |
openshift-cluster-version |
version |
RetrievePayload |
Retrieving and verifying payload version="4.18.29" image="quay.io/openshift-release-dev/ocp-release@sha256:8c885ea0b3c5124989f0a9b93eba98eb9fca6bbd0262772d85d90bf713a4d572" | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
StartingNewRevision |
new revision 2 triggered by "required configmap/config has changed" | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
RevisionTriggered |
new revision 3 triggered by "required configmap/kube-scheduler-pod has changed,required configmap/serviceaccount-ca has changed" | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]" | |
openshift-kube-scheduler |
kubelet |
installer-2-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f042fa25014f3d37f3ea967d21f361d2a11833ae18f2c750318101b25d2497ce" already present on machine | |
| (x51) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | RequiredInstallerResourcesMissing | configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0 |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | RevisionTriggered | new revision 1 triggered by "configmap \"kube-controller-manager-pod-0\" not found" |
| | openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled down replica set route-controller-manager-f4c4cbbd to 0 from 1 |
| | openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled up replica set route-controller-manager-846b467b5c to 1 from 0 |
| | openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled down replica set controller-manager-666dcff694 to 0 from 1 |
| | openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled up replica set controller-manager-7d958ff6f6 to 1 from 0 |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.202.245:443/healthz\": dial tcp 172.30.202.245:443: connect: connection refused\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.202.245:443/healthz\": dial tcp 172.30.202.245:443: connect: connection refused\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" |
| | openshift-route-controller-manager | replicaset-controller | route-controller-manager-f4c4cbbd | SuccessfulDelete | Deleted pod: route-controller-manager-f4c4cbbd-5bv8h |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | NodeTargetRevisionChanged | Updating node "master-0" from revision 0 to 1 because node master-0 static pod not found |
| | openshift-controller-manager | replicaset-controller | controller-manager-7d958ff6f6 | SuccessfulCreate | Created pod: controller-manager-7d958ff6f6-b8lzt |
| | openshift-controller-manager | default-scheduler | controller-manager-7d958ff6f6-b8lzt | FailedScheduling | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapUpdated | Updated ConfigMap/config -n openshift-route-controller-manager: cause by changes in data.openshift-route-controller-manager.client-ca.configmap,data.openshift-route-controller-manager.serving-cert.secret |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapUpdated | Updated ConfigMap/client-ca -n openshift-route-controller-manager: cause by changes in data.ca-bundle.crt |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]" to "NodeControllerDegraded: All master nodes are ready",Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 1"),Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 1" |
| | openshift-controller-manager | replicaset-controller | controller-manager-666dcff694 | SuccessfulDelete | Deleted pod: controller-manager-666dcff694-54zwc |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapUpdated | Updated ConfigMap/client-ca -n openshift-controller-manager: cause by changes in data.ca-bundle.crt |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 2, desired generation is 3.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: observed generation is 2, desired generation is 3.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" to "Progressing: deployment/controller-manager: observed generation is 3, desired generation is 4.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: observed generation is 3, desired generation is 4.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" |
| | openshift-cluster-version | openshift-cluster-version | version | PayloadLoaded | Payload loaded version="4.18.29" image="quay.io/openshift-release-dev/ocp-release@sha256:8c885ea0b3c5124989f0a9b93eba98eb9fca6bbd0262772d85d90bf713a4d572" architecture="amd64" |
| | openshift-authentication-operator | cluster-authentication-operator-routercertsdomainvalidationcontroller | authentication-operator | SecretCreated | Created Secret/v4-0-config-system-router-certs -n openshift-authentication because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/kube-controller-manager-pod-2 -n openshift-kube-controller-manager because it was missing |
| | openshift-route-controller-manager | replicaset-controller | route-controller-manager-846b467b5c | SuccessfulCreate | Created pod: route-controller-manager-846b467b5c-thc5v |
| | openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObservedConfigChanged | Writing updated section ("oauthServer") of observed config: " map[string]any{\n \t\"corsAllowedOrigins\": []any{string(`//127\\.0\\.0\\.1(:|$)`), string(\"//localhost(:|$)\")},\n \t\"oauthConfig\": map[string]any{\"assetPublicURL\": string(\"\"), \"loginURL\": string(\"https://api.sno.openstack.lab:6443\"), \"templates\": map[string]any{\"error\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...), \"login\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...), \"providerSelection\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...)}, \"tokenConfig\": map[string]any{\"accessTokenMaxAgeSeconds\": float64(86400), \"authorizeTokenMaxAgeSeconds\": float64(300)}},\n \t\"serverArguments\": map[string]any{\"audit-log-format\": []any{string(\"json\")}, \"audit-log-maxbackup\": []any{string(\"10\")}, \"audit-log-maxsize\": []any{string(\"100\")}, \"audit-log-path\": []any{string(\"/var/log/oauth-server/audit.log\")}, ...},\n \t\"servingInfo\": map[string]any{\n \t\t\"cipherSuites\": []any{string(\"TLS_AES_128_GCM_SHA256\"), string(\"TLS_AES_256_GCM_SHA384\"), string(\"TLS_CHACHA20_POLY1305_SHA256\"), string(\"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256\"), ...},\n \t\t\"minTLSVersion\": string(\"VersionTLS12\"),\n+ \t\t\"namedCertificates\": []any{\n+ \t\t\tmap[string]any{\n+ \t\t\t\t\"certFile\": string(\"/var/config/system/secrets/v4-0-config-system-router-certs/apps.\"...),\n+ \t\t\t\t\"keyFile\": string(\"/var/config/system/secrets/v4-0-config-system-router-certs/apps.\"...),\n+ \t\t\t\t\"names\": []any{string(\"*.apps.sno.openstack.lab\")},\n+ \t\t\t},\n+ \t\t},\n \t},\n \t\"volumesToMount\": map[string]any{\"identityProviders\": string(\"{}\")},\n }\n" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.202.245:443/healthz\": dial tcp 172.30.202.245:443: connect: connection refused\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.202.245:443/healthz\": dial tcp 172.30.202.245:443: connect: connection refused" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.202.245:443/healthz\": dial tcp 172.30.202.245:443: connect: connection refused" to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.202.245:443/healthz\": dial tcp 172.30.202.245:443: connect: connection refused" |
| | openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObserveRouterSecret | namedCertificates changed to []interface {}{map[string]interface {}{"certFile":"/var/config/system/secrets/v4-0-config-system-router-certs/apps.sno.openstack.lab", "keyFile":"/var/config/system/secrets/v4-0-config-system-router-certs/apps.sno.openstack.lab", "names":[]interface {}{"*.apps.sno.openstack.lab"}}} |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 2" to "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 3",Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 2" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 3" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | PodCreated | Created Pod/installer-1-master-0 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/config-2 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/cluster-policy-controller-config-2 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | PodCreated | Created Pod/installer-3-master-0 -n openshift-kube-scheduler because it was missing |
| | openshift-authentication-operator | cluster-authentication-operator-trust-distribution-trustdistributioncontroller | authentication-operator | ConfigMapCreated | Created ConfigMap/oauth-serving-cert -n openshift-config-managed because it was missing |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Progressing message changed from "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to deploy pods\nOperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to deploy pods" to "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to deploy pods",Available message changed from "CatalogdDeploymentCatalogdControllerManagerAvailable: Waiting for Deployment\nOperatorcontrollerDeploymentOperatorControllerControllerManagerAvailable: Waiting for Deployment" to "CatalogdDeploymentCatalogdControllerManagerAvailable: Waiting for Deployment" |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Degraded message changed from "CatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCertifiedOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" to "CatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCertifiedOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | StartingNewRevision | new revision 4 triggered by "required configmap/serviceaccount-ca has changed" |
| (x2) | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-targetconfigcontroller | openshift-kube-scheduler-operator | ConfigMapUpdated | Updated ConfigMap/serviceaccount-ca -n openshift-kube-scheduler: cause by changes in data.ca-bundle.crt |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Degraded message changed from "CatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCertifiedOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" to "CatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/controller-manager-kubeconfig-2 -n openshift-kube-controller-manager because it was missing |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Degraded message changed from "CatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" to "All is well" |
| | openshift-marketplace | kubelet | marketplace-operator-f797b99b6-hjjrk | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7664a2d4cb10e82ed32abbf95799f43fc3d10135d7dd94799730de504a89680a" in 10.388s (10.388s including waiting). Image size: 452589750 bytes. |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Degraded message changed from "CatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" to "CatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" |
| | openshift-controller-manager | kubelet | controller-manager-666dcff694-54zwc | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eddedae7578d79b5a3f748000ae5c00b9f14a04710f9f9ec7b52fc569be5dfb8" in 10.188s (10.188s including waiting). Image size: 552673986 bytes. |
| | openshift-multus | kubelet | multus-admission-controller-7dfc5b745f-258xq | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4ecc5bac651ff1942865baee5159582e9602c89b47eeab18400a32abcba8f690" in 10.571s (10.571s including waiting). Image size: 451039520 bytes. |
| | openshift-multus | kubelet | network-metrics-daemon-gp57t | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2632d7f05d5a992e91038ded81c715898f3fe803420a9b67a0201e9fd8075213" in 10.387s (10.387s including waiting). Image size: 443291941 bytes. |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/kube-scheduler-pod-4 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/kube-controller-cert-syncer-kubeconfig-2 -n openshift-kube-controller-manager because it was missing |
| | openshift-monitoring | kubelet | cluster-monitoring-operator-7ff994598c-p82nn | Started | Started container cluster-monitoring-operator |
| | openshift-multus | kubelet | multus-admission-controller-7dfc5b745f-258xq | Created | Created container: multus-admission-controller |
| | openshift-multus | kubelet | network-metrics-daemon-gp57t | Created | Created container: network-metrics-daemon |
| | openshift-operator-lifecycle-manager | kubelet | olm-operator-7cd7dbb44c-vzj4q | Created | Created container: olm-operator |
| | openshift-operator-lifecycle-manager | kubelet | olm-operator-7cd7dbb44c-vzj4q | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1ca78c423f43f89a0411e40393642f64e4f8df9e5f61c25e31047c4cce170f9" in 12.694s (12.694s including waiting). Image size: 857069957 bytes. |
| | openshift-multus | kubelet | network-metrics-daemon-gp57t | Started | Started container network-metrics-daemon |
| | openshift-multus | kubelet | network-metrics-daemon-gp57t | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69ffd8f8dcceedc2d6eb306cea33f8beabc1be1308cd5f4ee8b9a8e3eab9843" already present on machine |
| | openshift-route-controller-manager | kubelet | route-controller-manager-f4c4cbbd-5bv8h | Started | Started container route-controller-manager |
| | openshift-machine-api | deployment-controller | control-plane-machine-set-operator | ScalingReplicaSet | Scaled up replica set control-plane-machine-set-operator-7df95c79b5 to 1 |
| | openshift-operator-lifecycle-manager | kubelet | package-server-manager-67477646d4-7hndf | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1ca78c423f43f89a0411e40393642f64e4f8df9e5f61c25e31047c4cce170f9" in 12.198s (12.198s including waiting). Image size: 857069957 bytes. |
| | openshift-route-controller-manager | kubelet | route-controller-manager-f4c4cbbd-5bv8h | Created | Created container: route-controller-manager |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-apiserver | kubelet | apiserver-6cbcc775d9-jrlx4 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:df606f3b71d4376d1a2108c09f0d3dab455fc30bcb67c60e91590c105e9025bf" in 10.568s (10.568s including waiting). Image size: 583836304 bytes. |
| | openshift-dns | kubelet | node-resolver-qq64m | Started | Started container dns-node-resolver |
| | openshift-dns | kubelet | node-resolver-qq64m | Created | Created container: dns-node-resolver |
| | openshift-controller-manager | openshift-controller-manager | openshift-master-controllers | LeaderElection | controller-manager-666dcff694-54zwc became leader |
| | openshift-oauth-apiserver | kubelet | apiserver-7467446c4b-dlj7g | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91af633e585621630c40d14f188e37d36b44678d0a59e582d850bf8d593d3a0c" in 12.203s (12.203s including waiting). Image size: 499798563 bytes. |
| | openshift-kube-controller-manager | multus | installer-1-master-0 | AddedInterface | Add eth0 [10.128.0.46/23] from ovn-kubernetes |
openshift-route-controller-manager |
kubelet |
route-controller-manager-f4c4cbbd-5bv8h |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c416b201d480bddb5a4960ec42f4740761a1335001cf84ba5ae19ad6857771b1" in 12.348s (12.348s including waiting). Image size: 481559117 bytes. | |
openshift-monitoring |
kubelet |
cluster-monitoring-operator-7ff994598c-p82nn |
Created |
Created container: cluster-monitoring-operator | |
openshift-multus |
kubelet |
multus-admission-controller-7dfc5b745f-258xq |
Started |
Started container multus-admission-controller | |
openshift-kube-scheduler |
kubelet |
installer-3-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f042fa25014f3d37f3ea967d21f361d2a11833ae18f2c750318101b25d2497ce" already present on machine | |
| | openshift-kube-scheduler | multus | installer-3-master-0 | AddedInterface | Add eth0 [10.128.0.47/23] from ovn-kubernetes |
| | openshift-kube-scheduler | kubelet | installer-2-master-0 | Started | Started container installer |
| | openshift-kube-scheduler | kubelet | installer-2-master-0 | Created | Created container: installer |
| | openshift-multus | kubelet | multus-admission-controller-7dfc5b745f-258xq | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69ffd8f8dcceedc2d6eb306cea33f8beabc1be1308cd5f4ee8b9a8e3eab9843" already present on machine |
| | openshift-monitoring | kubelet | cluster-monitoring-operator-7ff994598c-p82nn | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3a77aa4d03b89ea284e3467a268e5989a77a2ef63e685eb1d5c5ea5b3922b7a" in 12.556s (12.556s including waiting). Image size: 478917802 bytes. |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-targetconfigcontroller | kube-controller-manager-operator | ConfigMapUpdated | Updated ConfigMap/serviceaccount-ca -n openshift-kube-controller-manager: cause by changes in data.ca-bundle.crt |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/serviceaccount-ca-2 -n openshift-kube-controller-manager because it was missing |
| | openshift-dns | kubelet | dns-default-xg7vh | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eb928c13a46d3fb45f4a881892d023a92d610a5430be0ffd916aaf8da8e7d297" in 9.574s (9.574s including waiting). Image size: 478642572 bytes. |
| | openshift-kube-apiserver | multus | installer-1-master-0 | AddedInterface | Add eth0 [10.128.0.45/23] from ovn-kubernetes |
| | openshift-kube-apiserver | kubelet | installer-1-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f43c31aa3359159d4557dad3cfaf812d8ce44db9cb9ae970e06d3479070b660" already present on machine |
| | openshift-controller-manager | kubelet | controller-manager-666dcff694-54zwc | Created | Created container: controller-manager |
| | openshift-operator-lifecycle-manager | kubelet | catalog-operator-fbc6455c4-5m8ll | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1ca78c423f43f89a0411e40393642f64e4f8df9e5f61c25e31047c4cce170f9" in 12.362s (12.362s including waiting). Image size: 857069957 bytes. |
| | openshift-operator-lifecycle-manager | kubelet | catalog-operator-fbc6455c4-5m8ll | Created | Created container: catalog-operator |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/config-4 -n openshift-kube-scheduler because it was missing |
| | openshift-controller-manager | kubelet | controller-manager-666dcff694-54zwc | Started | Started container controller-manager |
| | openshift-kube-scheduler | kubelet | installer-3-master-0 | Created | Created container: installer |
| | openshift-route-controller-manager | kubelet | route-controller-manager-f4c4cbbd-5bv8h | Killing | Stopping container route-controller-manager |
| | openshift-kube-controller-manager | kubelet | installer-1-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a824e468cf8dd61d347e35b2ee5bc2f815666957647098e21a1bb56ff613e5b9" already present on machine |
| | openshift-kube-controller-manager | kubelet | installer-1-master-0 | Started | Started container installer |
| | openshift-kube-apiserver | kubelet | installer-1-master-0 | Started | Started container installer |
| | openshift-kube-apiserver | kubelet | installer-1-master-0 | Created | Created container: installer |
| | openshift-dns | kubelet | dns-default-xg7vh | Created | Created container: dns |
| | openshift-dns | kubelet | dns-default-xg7vh | Started | Started container dns |
| | openshift-apiserver | kubelet | apiserver-6cbcc775d9-jrlx4 | Created | Created container: fix-audit-permissions |
| | openshift-apiserver | kubelet | apiserver-6cbcc775d9-jrlx4 | Started | Started container fix-audit-permissions |
| | openshift-dns | kubelet | dns-default-xg7vh | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69ffd8f8dcceedc2d6eb306cea33f8beabc1be1308cd5f4ee8b9a8e3eab9843" already present on machine |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceAccountCreated | Created ServiceAccount/prometheus-operator-admission-webhook -n openshift-monitoring because it was missing |
| | openshift-apiserver | kubelet | apiserver-6cbcc775d9-jrlx4 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:df606f3b71d4376d1a2108c09f0d3dab455fc30bcb67c60e91590c105e9025bf" already present on machine |
| | openshift-multus | kubelet | multus-admission-controller-7dfc5b745f-258xq | Started | Started container kube-rbac-proxy |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: read udp 10.128.0.16:46654->172.30.0.10:53: read: connection refused\nNodeControllerDegraded: All master nodes are ready" to "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: read udp 10.128.0.16:42373->172.30.0.10:53: read: connection refused\nNodeControllerDegraded: All master nodes are ready" |
| | openshift-multus | kubelet | multus-admission-controller-7dfc5b745f-258xq | Created | Created container: kube-rbac-proxy |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: read udp 10.128.0.16:46654->172.30.0.10:53: read: connection refused\nNodeControllerDegraded: All master nodes are ready" |
| | openshift-dns | kubelet | dns-default-xg7vh | Created | Created container: kube-rbac-proxy |
| | openshift-dns | kubelet | dns-default-xg7vh | Started | Started container kube-rbac-proxy |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceCreated | Created Service/prometheus-operator-admission-webhook -n openshift-monitoring because it was missing |
| | openshift-monitoring | default-scheduler | prometheus-operator-admission-webhook-7c85c4dffd-xv2wn | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-monitoring | replicaset-controller | prometheus-operator-admission-webhook-7c85c4dffd | SuccessfulCreate | Created pod: prometheus-operator-admission-webhook-7c85c4dffd-xv2wn |
| | openshift-route-controller-manager | route-controller-manager | openshift-route-controllers | LeaderElection | route-controller-manager-f4c4cbbd-5bv8h_4f9b9748-3e03-4be7-91d8-9b9b5396130d became leader |
| | openshift-monitoring | cluster-monitoring-operator-openshiftmonitoringtelemeterclientcertrequester | cluster-monitoring-operator | ClientCertificateCreated | A new client certificate for OpenShiftMonitoringTelemeterClientCertRequester is available |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/service-ca-2 -n openshift-kube-controller-manager because it was missing |
| | openshift-operator-lifecycle-manager | package-server-manager-67477646d4-7hndf_82aab290-170a-44ef-abf0-39b8795ad511 | packageserver-controller-lock | LeaderElection | package-server-manager-67477646d4-7hndf_82aab290-170a-44ef-abf0-39b8795ad511 became leader |
| | openshift-monitoring | deployment-controller | prometheus-operator-admission-webhook | ScalingReplicaSet | Scaled up replica set prometheus-operator-admission-webhook-7c85c4dffd to 1 |
| | openshift-operator-lifecycle-manager | kubelet | olm-operator-7cd7dbb44c-vzj4q | Started | Started container olm-operator |
| | openshift-operator-lifecycle-manager | kubelet | catalog-operator-fbc6455c4-5m8ll | Started | Started container catalog-operator |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/recycler-config-2 -n openshift-kube-controller-manager because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/prometheus-operator because it was missing |
| | kube-system | cluster-policy-controller-webhook-authenticator-cert-approver-csr-approver-controller-webhookauthenticatorcertapprover_csr-approver-controller | bootstrap-kube-controller-manager-master-0 | CSRApproval | The CSR "system:openshift:openshift-monitoring-62r5r" has been approved |
| | kube-system | cluster-policy-controller-webhook-authenticator-cert-approver-csr-approver-controller-webhookauthenticatorcertapprover_csr-approver-controller | bootstrap-kube-controller-manager-master-0 | CSRApproval | The CSR "system:openshift:openshift-monitoring-6959d" has been approved |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceAccountCreated | Created ServiceAccount/prometheus-operator -n openshift-monitoring because it was missing |
| | openshift-machine-api | default-scheduler | control-plane-machine-set-operator-7df95c79b5-7w5lm | Scheduled | Successfully assigned openshift-machine-api/control-plane-machine-set-operator-7df95c79b5-7w5lm to master-0 |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: read udp 10.128.0.16:42373->172.30.0.10:53: read: connection refused\nNodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready" |
| | openshift-monitoring | cluster-monitoring-operator-openshiftmonitoringtelemeterclientcertrequester | cluster-monitoring-operator | NoValidCertificateFound | No valid client certificate for OpenShiftMonitoringTelemeterClientCertRequester is found: unable to parse certificate: data does not contain any valid RSA or ECDSA certificates |
| | openshift-monitoring | cluster-monitoring-operator-openshiftmonitoringclientcertrequester | cluster-monitoring-operator | CSRCreated | A csr "system:openshift:openshift-monitoring-62r5r" is created for OpenShiftMonitoringClientCertRequester |
| | openshift-monitoring | cluster-monitoring-operator-openshiftmonitoringtelemeterclientcertrequester | cluster-monitoring-operator | CSRCreated | A csr "system:openshift:openshift-monitoring-6959d" is created for OpenShiftMonitoringTelemeterClientCertRequester |
| | openshift-controller-manager | kubelet | controller-manager-666dcff694-54zwc | Killing | Stopping container controller-manager |
| | openshift-machine-api | replicaset-controller | control-plane-machine-set-operator-7df95c79b5 | SuccessfulCreate | Created pod: control-plane-machine-set-operator-7df95c79b5-7w5lm |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | SecretCreated | Created Secret/alert-relabel-configs -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator-openshiftmonitoringclientcertrequester | cluster-monitoring-operator | NoValidCertificateFound | No valid client certificate for OpenShiftMonitoringClientCertRequester is found: unable to parse certificate: data does not contain any valid RSA or ECDSA certificates |
| | openshift-multus | kubelet | network-metrics-daemon-gp57t | Created | Created container: kube-rbac-proxy |
| | openshift-multus | kubelet | network-metrics-daemon-gp57t | Started | Started container kube-rbac-proxy |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ConfigMapCreated | Created ConfigMap/metrics-client-ca -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/prometheus-operator because it was missing |
| | openshift-oauth-apiserver | kubelet | apiserver-7467446c4b-dlj7g | Created | Created container: fix-audit-permissions |
| | openshift-oauth-apiserver | kubelet | apiserver-7467446c4b-dlj7g | Started | Started container fix-audit-permissions |
| | openshift-oauth-apiserver | kubelet | apiserver-7467446c4b-dlj7g | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91af633e585621630c40d14f188e37d36b44678d0a59e582d850bf8d593d3a0c" already present on machine |
| | openshift-kube-controller-manager | kubelet | installer-1-master-0 | Created | Created container: installer |
| | openshift-kube-scheduler | kubelet | installer-3-master-0 | Started | Started container installer |
| | openshift-monitoring | cluster-monitoring-operator-openshiftmonitoringclientcertrequester | cluster-monitoring-operator | ClientCertificateCreated | A new client certificate for OpenShiftMonitoringClientCertRequester is available |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/serviceaccount-ca-4 -n openshift-kube-scheduler because it was missing |
| | openshift-marketplace | default-scheduler | redhat-operators-7f22v | Scheduled | Successfully assigned openshift-marketplace/redhat-operators-7f22v to master-0 |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Progressing changed from True to False ("All is well"),Available changed from False to True ("All is well") |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/scheduler-kubeconfig-4 -n openshift-kube-scheduler because it was missing |
| | openshift-apiserver | kubelet | apiserver-6cbcc775d9-jrlx4 | Started | Started container openshift-apiserver-check-endpoints |
| | openshift-machine-api | kubelet | control-plane-machine-set-operator-7df95c79b5-7w5lm | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd3e9f8f00a59bda7483ec7dc8a0ed602f9ca30e3d72b22072dbdf2819da3f61" |
| (x2) | openshift-route-controller-manager | default-scheduler | route-controller-manager-846b467b5c-thc5v | FailedScheduling | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. |
| | openshift-apiserver | kubelet | apiserver-6cbcc775d9-jrlx4 | Created | Created container: openshift-apiserver-check-endpoints |
| | openshift-oauth-apiserver | kubelet | apiserver-7467446c4b-dlj7g | Started | Started container oauth-apiserver |
| | openshift-oauth-apiserver | kubelet | apiserver-7467446c4b-dlj7g | Created | Created container: oauth-apiserver |
| | openshift-marketplace | default-scheduler | certified-operators-ghr5b | Scheduled | Successfully assigned openshift-marketplace/certified-operators-ghr5b to master-0 |
| | openshift-machine-api | multus | control-plane-machine-set-operator-7df95c79b5-7w5lm | AddedInterface | Add eth0 [10.128.0.48/23] from ovn-kubernetes |
| | openshift-marketplace | multus | redhat-operators-7f22v | AddedInterface | Add eth0 [10.128.0.49/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | redhat-operators-7f22v | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1ca78c423f43f89a0411e40393642f64e4f8df9e5f61c25e31047c4cce170f9" already present on machine |
| | openshift-controller-manager | default-scheduler | controller-manager-7d958ff6f6-b8lzt | Scheduled | Successfully assigned openshift-controller-manager/controller-manager-7d958ff6f6-b8lzt to master-0 |
| | openshift-apiserver | kubelet | apiserver-6cbcc775d9-jrlx4 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f43c31aa3359159d4557dad3cfaf812d8ce44db9cb9ae970e06d3479070b660" already present on machine |
| | openshift-apiserver | kubelet | apiserver-6cbcc775d9-jrlx4 | Started | Started container openshift-apiserver |
| | openshift-apiserver | kubelet | apiserver-6cbcc775d9-jrlx4 | Created | Created container: openshift-apiserver |
| | openshift-marketplace | multus | certified-operators-ghr5b | AddedInterface | Add eth0 [10.128.0.51/23] from ovn-kubernetes |
| | openshift-controller-manager | openshift-controller-manager | openshift-master-controllers | LeaderElection | controller-manager-7d958ff6f6-b8lzt became leader |
| | openshift-marketplace | default-scheduler | community-operators-2p882 | Scheduled | Successfully assigned openshift-marketplace/community-operators-2p882 to master-0 |
| (x10) | openshift-operator-lifecycle-manager | operator-lifecycle-manager | packageserver | NoOperatorGroup | csv in namespace with no operatorgroups |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/service-account-private-key-2 -n openshift-kube-controller-manager because it was missing |
| | openshift-controller-manager | kubelet | controller-manager-7d958ff6f6-b8lzt | Started | Started container controller-manager |
| | openshift-controller-manager | kubelet | controller-manager-7d958ff6f6-b8lzt | Created | Created container: controller-manager |
| | openshift-marketplace | kubelet | certified-operators-ghr5b | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | certified-operators-ghr5b | Created | Created container: extract-utilities |
| | openshift-marketplace | kubelet | redhat-operators-7f22v | Pulling | Pulling image "registry.redhat.io/redhat/redhat-operator-index:v4.18" |
| | openshift-marketplace | kubelet | redhat-operators-7f22v | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | redhat-operators-7f22v | Created | Created container: extract-utilities |
| | openshift-marketplace | kubelet | certified-operators-ghr5b | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1ca78c423f43f89a0411e40393642f64e4f8df9e5f61c25e31047c4cce170f9" already present on machine |
| | openshift-controller-manager | multus | controller-manager-7d958ff6f6-b8lzt | AddedInterface | Add eth0 [10.128.0.50/23] from ovn-kubernetes |
| | openshift-controller-manager | kubelet | controller-manager-7d958ff6f6-b8lzt | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eddedae7578d79b5a3f748000ae5c00b9f14a04710f9f9ec7b52fc569be5dfb8" already present on machine |
| | openshift-marketplace | kubelet | community-operators-2p882 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1ca78c423f43f89a0411e40393642f64e4f8df9e5f61c25e31047c4cce170f9" already present on machine |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-4 -n openshift-kube-scheduler because it was missing |
| | openshift-marketplace | kubelet | community-operators-2p882 | Created | Created container: extract-utilities |
| | openshift-marketplace | kubelet | community-operators-2p882 | Started | Started container extract-utilities |
| | openshift-marketplace | multus | community-operators-2p882 | AddedInterface | Add eth0 [10.128.0.52/23] from ovn-kubernetes |
| | openshift-route-controller-manager | default-scheduler | route-controller-manager-846b467b5c-thc5v | Scheduled | Successfully assigned openshift-route-controller-manager/route-controller-manager-846b467b5c-thc5v to master-0 |
| | openshift-apiserver | kubelet | apiserver-6cbcc775d9-jrlx4 | ProbeError | Startup probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd ok [+]poststarthook/start-apiserver-admission-initializer ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/max-in-flight-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/image.openshift.io-apiserver-caches ok [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld [+]poststarthook/project.openshift.io-projectcache ok [+]poststarthook/project.openshift.io-projectauthorizationcache ok [+]poststarthook/openshift.io-startinformers ok [+]poststarthook/openshift.io-restmapperupdater ok [+]poststarthook/quota.openshift.io-clusterquotamapping ok livez check failed |
| | openshift-apiserver | kubelet | apiserver-6cbcc775d9-jrlx4 | Unhealthy | Startup probe failed: HTTP probe failed with statuscode: 500 |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/serving-cert-2 -n openshift-kube-controller-manager because it was missing |
| | openshift-marketplace | kubelet | certified-operators-ghr5b | Pulling | Pulling image "registry.redhat.io/redhat/certified-operator-index:v4.18" |
| | openshift-cluster-machine-approver | default-scheduler | machine-approver-f797d8546-qvgbq | Scheduled | Successfully assigned openshift-cluster-machine-approver/machine-approver-f797d8546-qvgbq to master-0 |
| | openshift-marketplace | default-scheduler | redhat-marketplace-krbbd | Scheduled | Successfully assigned openshift-marketplace/redhat-marketplace-krbbd to master-0 |
| | openshift-route-controller-manager | multus | route-controller-manager-846b467b5c-thc5v | AddedInterface | Add eth0 [10.128.0.53/23] from ovn-kubernetes |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-node namespace |
| | openshift-cluster-machine-approver | deployment-controller | machine-approver | ScalingReplicaSet | Scaled up replica set machine-approver-f797d8546 to 1 |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift namespace |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | SecretCreated | Created Secret/serving-cert-4 -n openshift-kube-scheduler because it was missing |
| | openshift-cluster-machine-approver | replicaset-controller | machine-approver-f797d8546 | SuccessfulCreate | Created pod: machine-approver-f797d8546-qvgbq |
| | openshift-marketplace | kubelet | community-operators-2p882 | Pulling | Pulling image "registry.redhat.io/redhat/community-operator-index:v4.18" |
| | openshift-marketplace | multus | redhat-marketplace-krbbd | AddedInterface | Add eth0 [10.128.0.54/23] from ovn-kubernetes |
| | openshift-machine-api | kubelet | control-plane-machine-set-operator-7df95c79b5-7w5lm | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd3e9f8f00a59bda7483ec7dc8a0ed602f9ca30e3d72b22072dbdf2819da3f61" in 3.741s (3.742s including waiting). Image size: 465144618 bytes. |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | SecretCreated | Created Secret/localhost-recovery-client-token-4 -n openshift-kube-scheduler because it was missing |
| | openshift-route-controller-manager | kubelet | route-controller-manager-846b467b5c-thc5v | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c416b201d480bddb5a4960ec42f4740761a1335001cf84ba5ae19ad6857771b1" already present on machine |
| | openshift-cluster-machine-approver | kubelet | machine-approver-f797d8546-qvgbq | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69ffd8f8dcceedc2d6eb306cea33f8beabc1be1308cd5f4ee8b9a8e3eab9843" already present on machine |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/localhost-recovery-client-token-2 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | RevisionTriggered | new revision 2 triggered by "required configmap/config has changed" |
| | openshift-route-controller-manager | kubelet | route-controller-manager-846b467b5c-thc5v | Created | Created container: route-controller-manager |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | RevisionTriggered | new revision 4 triggered by "required configmap/serviceaccount-ca has changed" |
| | openshift-cluster-machine-approver | kubelet | machine-approver-f797d8546-qvgbq | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8cc27777e72233024fe84ee1faa168aec715a0b24912a3ce70715ddccba328df" |
| | openshift-cluster-machine-approver | kubelet | machine-approver-f797d8546-qvgbq | Started | Started container kube-rbac-proxy |
| | openshift-cluster-machine-approver | kubelet | machine-approver-f797d8546-qvgbq | Created | Created container: kube-rbac-proxy |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-targetconfigcontroller | kube-controller-manager-operator | ConfigMapUpdated | Updated ConfigMap/kube-controller-manager-pod -n openshift-kube-controller-manager: cause by changes in data.pod.yaml |
| | openshift-route-controller-manager | kubelet | route-controller-manager-846b467b5c-thc5v | Started | Started container route-controller-manager |
| | openshift-cloud-credential-operator | default-scheduler | cloud-credential-operator-698c598cfc-95jdn | Scheduled | Successfully assigned openshift-cloud-credential-operator/cloud-credential-operator-698c598cfc-95jdn to master-0 |
| | openshift-route-controller-manager | route-controller-manager | openshift-route-controllers | LeaderElection | route-controller-manager-846b467b5c-thc5v_e6929345-e307-497a-bdf0-99ff141b1762 became leader |
| | openshift-marketplace | kubelet | redhat-marketplace-krbbd | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1ca78c423f43f89a0411e40393642f64e4f8df9e5f61c25e31047c4cce170f9" already present on machine |
| | openshift-marketplace | kubelet | redhat-marketplace-krbbd | Created | Created container: extract-utilities |
| | openshift-marketplace | kubelet | redhat-marketplace-krbbd | Started | Started container extract-utilities |
openshift-authentication-operator |
oauth-apiserver-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice |
authentication-operator |
Created <unknown>/v1.oauth.openshift.io because it was missing | ||
openshift-authentication-operator |
oauth-apiserver-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice |
authentication-operator |
Created <unknown>/v1.user.openshift.io because it was missing | ||
openshift-authentication-operator |
oauth-apiserver-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice |
authentication-operator |
OpenShiftAPICheckFailed |
"oauth.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request | |
openshift-authentication-operator |
oauth-apiserver-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice |
authentication-operator |
OpenShiftAPICheckFailed |
"user.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request | |
openshift-marketplace |
kubelet |
redhat-marketplace-krbbd |
Pulling |
Pulling image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" | |
openshift-apiserver-operator |
openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice |
openshift-apiserver-operator |
Created <unknown>/v1.quota.openshift.io because it was missing | ||
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.202.245:443/healthz\": dial tcp 172.30.202.245:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: \"oauth.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"user.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.202.245:443/healthz\": dial tcp 172.30.202.245:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: \"oauth.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"user.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.202.245:443/healthz\": dial tcp 172.30.202.245:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.39:8443/apis/oauth.openshift.io/v1: bad status from https://10.128.0.39:8443/apis/oauth.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.128.0.39:8443/apis/user.openshift.io/v1: bad status from https://10.128.0.39:8443/apis/user.openshift.io/v1: 401\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.202.245:443/healthz\": dial tcp 172.30.202.245:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | | Created <unknown>/v1.image.openshift.io because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | StartingNewRevision | new revision 3 triggered by "required configmap/serviceaccount-ca has changed" |
| | openshift-machine-api | control-plane-machine-set-operator-7df95c79b5-7w5lm_34f8451d-b999-4044-97f9-b59dd9ff9e3d | control-plane-machine-set-leader | LeaderElection | control-plane-machine-set-operator-7df95c79b5-7w5lm_34f8451d-b999-4044-97f9-b59dd9ff9e3d became leader |
| | openshift-cluster-samples-operator | deployment-controller | cluster-samples-operator | ScalingReplicaSet | Scaled up replica set cluster-samples-operator-797cfd8b47 to 1 |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | | Created <unknown>/v1.apps.openshift.io because it was missing |
| | openshift-cluster-samples-operator | replicaset-controller | cluster-samples-operator-797cfd8b47 | SuccessfulCreate | Created pod: cluster-samples-operator-797cfd8b47-8wqgp |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | | Created <unknown>/v1.authorization.openshift.io because it was missing |
| | openshift-cluster-samples-operator | kubelet | cluster-samples-operator-797cfd8b47-8wqgp | FailedMount | MountVolume.SetUp failed for volume "samples-operator-tls" : secret "samples-operator-tls" not found |
| | openshift-cluster-samples-operator | default-scheduler | cluster-samples-operator-797cfd8b47-8wqgp | Scheduled | Successfully assigned openshift-cluster-samples-operator/cluster-samples-operator-797cfd8b47-8wqgp to master-0 |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | | Created <unknown>/v1.build.openshift.io because it was missing |
| | openshift-cloud-credential-operator | replicaset-controller | cloud-credential-operator-698c598cfc | SuccessfulCreate | Created pod: cloud-credential-operator-698c598cfc-95jdn |
| | openshift-cloud-credential-operator | deployment-controller | cloud-credential-operator | ScalingReplicaSet | Scaled up replica set cloud-credential-operator-698c598cfc to 1 |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | | Created <unknown>/v1.project.openshift.io because it was missing |
| | openshift-cloud-credential-operator | kubelet | cloud-credential-operator-698c598cfc-95jdn | Started | Started container kube-rbac-proxy |
| | openshift-cloud-credential-operator | kubelet | cloud-credential-operator-698c598cfc-95jdn | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61664aa69b33349cc6de45e44ae6033e7f483c034ea01c0d9a8ca08a12d88e3a" |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorVersionChanged | clusteroperator/openshift-apiserver version "openshift-apiserver" changed from "" to "4.18.29" |
| | openshift-cloud-credential-operator | kubelet | cloud-credential-operator-698c598cfc-95jdn | Created | Created container: kube-rbac-proxy |
| | openshift-cloud-credential-operator | kubelet | cloud-credential-operator-698c598cfc-95jdn | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69ffd8f8dcceedc2d6eb306cea33f8beabc1be1308cd5f4ee8b9a8e3eab9843" already present on machine |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 3" to "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 4",Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 3" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 4" |
| | openshift-cloud-credential-operator | multus | cloud-credential-operator-698c598cfc-95jdn | AddedInterface | Add eth0 [10.128.0.55/23] from ovn-kubernetes |
| | openshift-kube-scheduler | kubelet | installer-3-master-0 | Killing | Stopping container installer |
| | openshift-machine-api | default-scheduler | cluster-baremetal-operator-78f758c7b9-zgkh5 | Scheduled | Successfully assigned openshift-machine-api/cluster-baremetal-operator-78f758c7b9-zgkh5 to master-0 |
| | openshift-machine-api | replicaset-controller | cluster-baremetal-operator-78f758c7b9 | SuccessfulCreate | Created pod: cluster-baremetal-operator-78f758c7b9-zgkh5 |
| | openshift-machine-api | deployment-controller | cluster-baremetal-operator | ScalingReplicaSet | Scaled up replica set cluster-baremetal-operator-78f758c7b9 to 1 |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/kube-controller-manager-pod-3 -n openshift-kube-controller-manager because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | | Created <unknown>/v1.route.openshift.io because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | | Created <unknown>/v1.security.openshift.io because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Progressing changed from True to False ("All is well"),Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady" to "APIServicesAvailable: PreconditionNotReady" |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: status.versions changed from [{"operator" "4.18.29"}] to [{"operator" "4.18.29"} {"openshift-apiserver" "4.18.29"}] |
| | openshift-machine-api | deployment-controller | cluster-autoscaler-operator | ScalingReplicaSet | Scaled up replica set cluster-autoscaler-operator-5f49d774cd to 1 |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "image.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request |
| | openshift-cluster-samples-operator | kubelet | cluster-samples-operator-797cfd8b47-8wqgp | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1386b0fcb731d843f15fb64532f8b676c927821d69dd3d4503c973c3e2a04216" |
| | openshift-insights | default-scheduler | insights-operator-55965856b6-skbmb | Scheduled | Successfully assigned openshift-insights/insights-operator-55965856b6-skbmb to master-0 |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "route.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "security.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "template.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServicesAvailable: PreconditionNotReady" to "APIServicesAvailable: \"apps.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"authorization.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"build.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"image.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"project.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"quota.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"route.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"security.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"template.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request" |
| | openshift-machine-config-operator | replicaset-controller | machine-config-operator-dc5d7666f | SuccessfulCreate | Created pod: machine-config-operator-dc5d7666f-p2cmn |
| | openshift-insights | replicaset-controller | insights-operator-55965856b6 | SuccessfulCreate | Created pod: insights-operator-55965856b6-skbmb |
| | openshift-insights | deployment-controller | insights-operator | ScalingReplicaSet | Scaled up replica set insights-operator-55965856b6 to 1 |
| | openshift-machine-api | kubelet | cluster-baremetal-operator-78f758c7b9-zgkh5 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a92c310ce30dcb3de85d6aac868e0d80919670fa29ef83d55edd96b0cae35563" |
| | openshift-machine-api | default-scheduler | cluster-autoscaler-operator-5f49d774cd-894dk | Scheduled | Successfully assigned openshift-machine-api/cluster-autoscaler-operator-5f49d774cd-894dk to master-0 |
| | openshift-machine-api | multus | cluster-baremetal-operator-78f758c7b9-zgkh5 | AddedInterface | Add eth0 [10.128.0.57/23] from ovn-kubernetes |
| | openshift-machine-config-operator | deployment-controller | machine-config-operator | ScalingReplicaSet | Scaled up replica set machine-config-operator-dc5d7666f to 1 |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "quota.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "project.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request |
| | openshift-cluster-samples-operator | multus | cluster-samples-operator-797cfd8b47-8wqgp | AddedInterface | Add eth0 [10.128.0.56/23] from ovn-kubernetes |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | | Created <unknown>/v1.template.openshift.io because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "build.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request |
| | openshift-machine-api | replicaset-controller | cluster-autoscaler-operator-5f49d774cd | SuccessfulCreate | Created pod: cluster-autoscaler-operator-5f49d774cd-894dk |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "authorization.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing changed from True to False ("AuthenticatorCertKeyProgressing: All is well"),Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.39:8443/apis/oauth.openshift.io/v1: bad status from https://10.128.0.39:8443/apis/oauth.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.128.0.39:8443/apis/user.openshift.io/v1: bad status from https://10.128.0.39:8443/apis/user.openshift.io/v1: 401\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.202.245:443/healthz\": dial tcp 172.30.202.245:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.39:8443/apis/oauth.openshift.io/v1: bad status from https://10.128.0.39:8443/apis/oauth.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.128.0.39:8443/apis/user.openshift.io/v1: bad status from https://10.128.0.39:8443/apis/user.openshift.io/v1: 401\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.202.245:443/healthz\": dial tcp 172.30.202.245:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "apps.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 1" to "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 2",Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 1" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 2" |
| | openshift-kube-controller-manager | kubelet | installer-1-master-0 | Killing | Stopping container installer |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorVersionChanged | clusteroperator/authentication version "oauth-apiserver" changed from "" to "4.18.29" |
| | openshift-marketplace | default-scheduler | redhat-operators-wksdw | Scheduled | Successfully assigned openshift-marketplace/redhat-operators-wksdw to master-0 |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: status.versions changed from [{"operator" "4.18.29"}] to [{"operator" "4.18.29"} {"oauth-apiserver" "4.18.29"}] |
| | openshift-marketplace | multus | redhat-operators-wksdw | AddedInterface | Add eth0 [10.128.0.58/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | redhat-operators-wksdw | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1ca78c423f43f89a0411e40393642f64e4f8df9e5f61c25e31047c4cce170f9" already present on machine |
| | openshift-machine-api | kubelet | cluster-autoscaler-operator-5f49d774cd-894dk | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:72bbe2c638872937108f647950ab8ad35c0428ca8ecc6a39a8314aace7d95078" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/config-3 -n openshift-kube-controller-manager because it was missing |
| | openshift-marketplace | kubelet | redhat-operators-wksdw | Created | Created container: extract-utilities |
| | openshift-cloud-controller-manager-operator | deployment-controller | cluster-cloud-controller-manager-operator | ScalingReplicaSet | Scaled up replica set cluster-cloud-controller-manager-operator-74f484689c to 1 |
| | openshift-machine-api | kubelet | cluster-autoscaler-operator-5f49d774cd-894dk | Started | Started container kube-rbac-proxy |
| | openshift-operator-lifecycle-manager | replicaset-controller | packageserver-675f5c767c | SuccessfulCreate | Created pod: packageserver-675f5c767c-mtdrq |
| | openshift-machine-api | kubelet | cluster-autoscaler-operator-5f49d774cd-894dk | Created | Created container: kube-rbac-proxy |
| | openshift-machine-api | kubelet | cluster-autoscaler-operator-5f49d774cd-894dk | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69ffd8f8dcceedc2d6eb306cea33f8beabc1be1308cd5f4ee8b9a8e3eab9843" already present on machine |
| | openshift-cluster-storage-operator | deployment-controller | cluster-storage-operator | ScalingReplicaSet | Scaled up replica set cluster-storage-operator-dcf7fc84b to 1 |
| | openshift-machine-api | multus | cluster-autoscaler-operator-5f49d774cd-894dk | AddedInterface | Add eth0 [10.128.0.59/23] from ovn-kubernetes |
| | openshift-cluster-storage-operator | replicaset-controller | cluster-storage-operator-dcf7fc84b | SuccessfulCreate | Created pod: cluster-storage-operator-dcf7fc84b-fncfd |
| | openshift-machine-config-operator | default-scheduler | machine-config-operator-dc5d7666f-p2cmn | Scheduled | Successfully assigned openshift-machine-config-operator/machine-config-operator-dc5d7666f-p2cmn to master-0 |
| | openshift-machine-config-operator | multus | machine-config-operator-dc5d7666f-p2cmn | AddedInterface | Add eth0 [10.128.0.61/23] from ovn-kubernetes |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServicesAvailable: \"apps.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"authorization.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"build.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"image.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"project.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"quota.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"route.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"security.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"template.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request" to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.128.0.42:8443/apis/apps.openshift.io/v1: bad status from https://10.128.0.42:8443/apis/apps.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from https://10.128.0.42:8443/apis/authorization.openshift.io/v1: bad status from https://10.128.0.42:8443/apis/authorization.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.128.0.42:8443/apis/build.openshift.io/v1: bad status from https://10.128.0.42:8443/apis/build.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.128.0.42:8443/apis/image.openshift.io/v1: bad status from https://10.128.0.42:8443/apis/image.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.128.0.42:8443/apis/project.openshift.io/v1: bad status from https://10.128.0.42:8443/apis/project.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.128.0.42:8443/apis/quota.openshift.io/v1: bad status from https://10.128.0.42:8443/apis/quota.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.128.0.42:8443/apis/route.openshift.io/v1: bad status from https://10.128.0.42:8443/apis/route.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.128.0.42:8443/apis/security.openshift.io/v1: bad status from https://10.128.0.42:8443/apis/security.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.128.0.42:8443/apis/template.openshift.io/v1: bad status from https://10.128.0.42:8443/apis/template.openshift.io/v1: 401" |
| | openshift-cluster-storage-operator | default-scheduler | cluster-storage-operator-dcf7fc84b-fncfd | Scheduled | Successfully assigned openshift-cluster-storage-operator/cluster-storage-operator-dcf7fc84b-fncfd to master-0 |
| | openshift-operator-lifecycle-manager | operator-lifecycle-manager | packageserver | RequirementsUnknown | InstallModes now support target namespaces |
| | openshift-operator-lifecycle-manager | deployment-controller | packageserver | ScalingReplicaSet | Scaled up replica set packageserver-675f5c767c to 1 |
| | openshift-insights | kubelet | insights-operator-55965856b6-skbmb | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33a20002692769235e95271ab071783c57ff50681088fa1035b86af31e73cf20" |
| | openshift-operator-lifecycle-manager | default-scheduler | packageserver-675f5c767c-mtdrq | Scheduled | Successfully assigned openshift-operator-lifecycle-manager/packageserver-675f5c767c-mtdrq to master-0 |
| | openshift-insights | multus | insights-operator-55965856b6-skbmb | AddedInterface | Add eth0 [10.128.0.60/23] from ovn-kubernetes |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | PodCreated | Created Pod/installer-4-master-0 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/cluster-policy-controller-config-3 -n openshift-kube-controller-manager because it was missing |
| | openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled up replica set controller-manager-6686654b8d to 1 from 0 |
| (x4) | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | DeploymentUpdated | Updated Deployment.apps/controller-manager -n openshift-controller-manager because it changed |
| (x4) | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | DeploymentUpdated | Updated Deployment.apps/route-controller-manager -n openshift-route-controller-manager because it changed |
| | openshift-controller-manager | default-scheduler | controller-manager-6686654b8d-rrndk | FailedScheduling | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. |
| | openshift-machine-api | default-scheduler | machine-api-operator-88d48b57d-9fjtd | Scheduled | Successfully assigned openshift-machine-api/machine-api-operator-88d48b57d-9fjtd to master-0 |
| | openshift-route-controller-manager | replicaset-controller | route-controller-manager-846b467b5c | SuccessfulDelete | Deleted pod: route-controller-manager-846b467b5c-thc5v |
| | openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled up replica set route-controller-manager-95cb5f987 to 1 from 0 |
| | openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled down replica set route-controller-manager-846b467b5c to 0 from 1 |
| | openshift-cloud-controller-manager-operator | replicaset-controller | cluster-cloud-controller-manager-operator-74f484689c | SuccessfulCreate | Created pod: cluster-cloud-controller-manager-operator-74f484689c-jmfn2 |
| | openshift-cluster-storage-operator | multus | cluster-storage-operator-dcf7fc84b-fncfd | AddedInterface | Add eth0 [10.128.0.62/23] from ovn-kubernetes |
| | openshift-route-controller-manager | kubelet | route-controller-manager-846b467b5c-thc5v | Killing | Stopping container route-controller-manager |
| | openshift-controller-manager | replicaset-controller | controller-manager-6686654b8d | SuccessfulCreate | Created pod: controller-manager-6686654b8d-rrndk |
| | openshift-machine-api | deployment-controller | machine-api-operator | ScalingReplicaSet | Scaled up replica set machine-api-operator-88d48b57d to 1 |
| | openshift-route-controller-manager | replicaset-controller | route-controller-manager-95cb5f987 | SuccessfulCreate | Created pod: route-controller-manager-95cb5f987-46bsk |
openshift-controller-manager |
deployment-controller |
controller-manager |
ScalingReplicaSet |
Scaled down replica set controller-manager-7d958ff6f6 to 0 from 1 | |
openshift-marketplace |
default-scheduler |
community-operators-rxhpq |
Scheduled |
Successfully assigned openshift-marketplace/community-operators-rxhpq to master-0 | |
openshift-marketplace |
kubelet |
redhat-operators-wksdw |
Pulling |
Pulling image "registry.redhat.io/redhat/redhat-operator-index:v4.18" | |
| (x2) | openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
ConfigMapUpdated |
Updated ConfigMap/config -n openshift-controller-manager: cause by changes in data.openshift-controller-manager.client-ca.configmap |
| | openshift-controller-manager | kubelet | controller-manager-7d958ff6f6-b8lzt | Killing | Stopping container controller-manager |
| | openshift-marketplace | kubelet | redhat-operators-wksdw | Started | Started container extract-utilities |
| | openshift-controller-manager | replicaset-controller | controller-manager-7d958ff6f6 | SuccessfulDelete | Deleted pod: controller-manager-7d958ff6f6-b8lzt |
| | openshift-cloud-controller-manager-operator | default-scheduler | cluster-cloud-controller-manager-operator-74f484689c-jmfn2 | Scheduled | Successfully assigned openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-74f484689c-jmfn2 to master-0 |
| | openshift-machine-api | replicaset-controller | machine-api-operator-88d48b57d | SuccessfulCreate | Created pod: machine-api-operator-88d48b57d-9fjtd |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapUpdated | Updated ConfigMap/config -n openshift-route-controller-manager: cause by changes in data.openshift-route-controller-manager.client-ca.configmap |
| | openshift-cluster-storage-operator | kubelet | cluster-storage-operator-dcf7fc84b-fncfd | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:97d26892192b552c16527bf2771e1b86528ab581a02dd9279cdf71c194830e3e" |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorVersionChanged | clusteroperator/openshift-controller-manager version "operator" changed from "" to "4.18.29" |
| | openshift-marketplace | default-scheduler | certified-operators-8qs8v | Scheduled | Successfully assigned openshift-marketplace/certified-operators-8qs8v to master-0 |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | PodCreated | Created Pod/installer-2-master-0 -n openshift-kube-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 3, desired generation is 4.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: observed generation is 3, desired generation is 4.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" to "Progressing: deployment/controller-manager: observed generation is 4, desired generation is 5.\nProgressing: deployment/route-controller-manager: observed generation is 4, desired generation is 5.",Available changed from False to True ("All is well"),status.versions changed from [] to [{"operator" "4.18.29"}] |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/controller-manager-kubeconfig-3 -n openshift-kube-controller-manager because it was missing |
| | openshift-route-controller-manager | kubelet | route-controller-manager-846b467b5c-thc5v | Unhealthy | Readiness probe failed: Get "https://10.128.0.53:8443/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) |
| | openshift-route-controller-manager | kubelet | route-controller-manager-846b467b5c-thc5v | ProbeError | Readiness probe error: Get "https://10.128.0.53:8443/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) body: |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/kube-controller-cert-syncer-kubeconfig-3 -n openshift-kube-controller-manager because it was missing |
| (x2) | openshift-route-controller-manager | default-scheduler | route-controller-manager-95cb5f987-46bsk | FailedScheduling | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. |
| | openshift-controller-manager | default-scheduler | controller-manager-6686654b8d-rrndk | Scheduled | Successfully assigned openshift-controller-manager/controller-manager-6686654b8d-rrndk to master-0 |
| (x26) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-satokensignercontroller | kube-controller-manager-operator | SATokenSignerControllerStuck | unexpected addresses: 192.168.32.10 |
| | openshift-route-controller-manager | default-scheduler | route-controller-manager-95cb5f987-46bsk | Scheduled | Successfully assigned openshift-route-controller-manager/route-controller-manager-95cb5f987-46bsk to master-0 |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/serviceaccount-ca-3 -n openshift-kube-controller-manager because it was missing |
| | openshift-etcd | static-pod-installer | installer-1-master-0 | StaticPodInstallerCompleted | Successfully installed revision 1 |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/service-ca-3 -n openshift-kube-controller-manager because it was missing |
| | openshift-etcd | kubelet | etcd-master-0-master-0 | Killing | Stopping container etcdctl |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container setup |
| | openshift-machine-config-operator | kubelet | machine-config-operator-dc5d7666f-p2cmn | Created | Created container: kube-rbac-proxy |
| | openshift-machine-config-operator | kubelet | machine-config-operator-dc5d7666f-p2cmn | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69ffd8f8dcceedc2d6eb306cea33f8beabc1be1308cd5f4ee8b9a8e3eab9843" already present on machine |
| | openshift-marketplace | kubelet | redhat-marketplace-krbbd | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" in 39.689s (39.689s including waiting). Image size: 1129027903 bytes. |
| | openshift-marketplace | kubelet | redhat-marketplace-krbbd | Created | Created container: extract-content |
| | openshift-machine-api | kubelet | cluster-autoscaler-operator-5f49d774cd-894dk | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:72bbe2c638872937108f647950ab8ad35c0428ca8ecc6a39a8314aace7d95078" in 36.853s (36.853s including waiting). Image size: 450841337 bytes. |
| | kube-system | kubelet | bootstrap-kube-scheduler-master-0 | Created | Created container: kube-scheduler |
| | openshift-marketplace | kubelet | community-operators-2p882 | Pulled | Successfully pulled image "registry.redhat.io/redhat/community-operator-index:v4.18" in 43.549s (43.549s including waiting). Image size: 1201434959 bytes. |
| | openshift-marketplace | kubelet | redhat-operators-wksdw | Created | Created container: extract-content |
| | openshift-marketplace | kubelet | redhat-operators-wksdw | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-operator-index:v4.18" in 35.549s (35.549s including waiting). Image size: 1610175307 bytes. |
| | openshift-cloud-credential-operator | kubelet | cloud-credential-operator-698c598cfc-95jdn | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61664aa69b33349cc6de45e44ae6033e7f483c034ea01c0d9a8ca08a12d88e3a" in 38.596s (38.596s including waiting). Image size: 874825223 bytes. |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-74f484689c-jmfn2 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd38b8be3af889b0f97e2df41517c89a11260901432a9a1ee943195bb3a22737" |
| | openshift-cloud-credential-operator | kubelet | cloud-credential-operator-698c598cfc-95jdn | Created | Created container: cloud-credential-operator |
| | openshift-cloud-credential-operator | kubelet | cloud-credential-operator-698c598cfc-95jdn | Started | Started container cloud-credential-operator |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:188637a52cafee61ec461e92fb0c605e28be325b9ac1f2ac8a37d68e97654718" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: setup |
| | openshift-insights | kubelet | insights-operator-55965856b6-skbmb | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33a20002692769235e95271ab071783c57ff50681088fa1035b86af31e73cf20" in 36.626s (36.626s including waiting). Image size: 499125567 bytes. |
| | kube-system | kubelet | bootstrap-kube-scheduler-master-0 | Started | Started container kube-scheduler |
| | openshift-cluster-samples-operator | kubelet | cluster-samples-operator-797cfd8b47-8wqgp | Created | Created container: cluster-samples-operator |
| | openshift-marketplace | kubelet | certified-operators-ghr5b | Pulled | Successfully pulled image "registry.redhat.io/redhat/certified-operator-index:v4.18" in 44.555s (44.555s including waiting). Image size: 1205106509 bytes. |
| | openshift-cluster-storage-operator | kubelet | cluster-storage-operator-dcf7fc84b-fncfd | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:97d26892192b552c16527bf2771e1b86528ab581a02dd9279cdf71c194830e3e" in 34.732s (34.732s including waiting). Image size: 508042119 bytes. |
| | openshift-marketplace | kubelet | certified-operators-ghr5b | Started | Started container extract-content |
| | openshift-cluster-samples-operator | kubelet | cluster-samples-operator-797cfd8b47-8wqgp | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1386b0fcb731d843f15fb64532f8b676c927821d69dd3d4503c973c3e2a04216" already present on machine |
| | openshift-marketplace | kubelet | redhat-operators-7f22v | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-operator-index:v4.18" in 45.6s (45.6s including waiting). Image size: 1610175307 bytes. |
| | openshift-marketplace | kubelet | redhat-operators-7f22v | Created | Created container: extract-content |
| | openshift-marketplace | kubelet | redhat-operators-7f22v | Started | Started container extract-content |
| | openshift-marketplace | kubelet | certified-operators-ghr5b | Created | Created container: extract-content |
| | openshift-machine-api | kubelet | cluster-baremetal-operator-78f758c7b9-zgkh5 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69ffd8f8dcceedc2d6eb306cea33f8beabc1be1308cd5f4ee8b9a8e3eab9843" already present on machine |
| | openshift-cluster-samples-operator | kubelet | cluster-samples-operator-797cfd8b47-8wqgp | Started | Started container cluster-samples-operator |
| | openshift-marketplace | kubelet | redhat-marketplace-krbbd | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" |
| | openshift-cluster-samples-operator | kubelet | cluster-samples-operator-797cfd8b47-8wqgp | Created | Created container: cluster-samples-operator-watch |
| | openshift-machine-config-operator | kubelet | machine-config-operator-dc5d7666f-p2cmn | Started | Started container kube-rbac-proxy |
| | openshift-marketplace | kubelet | redhat-operators-wksdw | Started | Started container extract-content |
| | openshift-marketplace | kubelet | community-operators-2p882 | Created | Created container: extract-content |
| | openshift-machine-api | kubelet | cluster-baremetal-operator-78f758c7b9-zgkh5 | Created | Created container: baremetal-kube-rbac-proxy |
| | openshift-marketplace | kubelet | community-operators-2p882 | Started | Started container extract-content |
| | openshift-cluster-samples-operator | kubelet | cluster-samples-operator-797cfd8b47-8wqgp | Started | Started container cluster-samples-operator-watch |
| | openshift-marketplace | kubelet | redhat-operators-7f22v | Killing | Stopping container extract-content |
| | openshift-marketplace | kubelet | redhat-marketplace-krbbd | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" in 449ms (449ms including waiting). Image size: 912722556 bytes. |
| | openshift-machine-api | kubelet | cluster-baremetal-operator-78f758c7b9-zgkh5 | Started | Started container baremetal-kube-rbac-proxy |
| | openshift-marketplace | kubelet | redhat-marketplace-krbbd | Started | Started container extract-content |
| | openshift-marketplace | kubelet | redhat-marketplace-krbbd | Started | Started container registry-server |
| | openshift-marketplace | kubelet | redhat-marketplace-krbbd | Created | Created container: registry-server |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-74f484689c-jmfn2 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd38b8be3af889b0f97e2df41517c89a11260901432a9a1ee943195bb3a22737" in 3.397s (3.397s including waiting). Image size: 551889548 bytes. |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-74f484689c-jmfn2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69ffd8f8dcceedc2d6eb306cea33f8beabc1be1308cd5f4ee8b9a8e3eab9843" already present on machine |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-74f484689c-jmfn2 | Created | Created container: kube-rbac-proxy |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-74f484689c-jmfn2 | Started | Started container kube-rbac-proxy |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:188637a52cafee61ec461e92fb0c605e28be325b9ac1f2ac8a37d68e97654718" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcd-ensure-env-vars |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcd-ensure-env-vars |
| (x3) | kube-system | kubelet | bootstrap-kube-controller-manager-master-0 | Unhealthy | Startup probe failed: Get "https://192.168.32.10:10257/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) |
| | openshift-machine-api | kubelet | machine-api-operator-88d48b57d-9fjtd | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_machine-api-operator-88d48b57d-9fjtd_openshift-machine-api_c50317d3-f7cd-4133-845e-44add57ac378_0(a7ea781eb62388abf3d4251d464a1c215aed96cb83a2b09a854429643ac8236c): error adding pod openshift-machine-api_machine-api-operator-88d48b57d-9fjtd to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"a7ea781eb62388abf3d4251d464a1c215aed96cb83a2b09a854429643ac8236c" Netns:"/var/run/netns/f0df9b09-6ce7-41e5-a134-04f90f5a7bad" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=machine-api-operator-88d48b57d-9fjtd;K8S_POD_INFRA_CONTAINER_ID=a7ea781eb62388abf3d4251d464a1c215aed96cb83a2b09a854429643ac8236c;K8S_POD_UID=c50317d3-f7cd-4133-845e-44add57ac378" Path:"" ERRORED: error configuring pod [openshift-machine-api/machine-api-operator-88d48b57d-9fjtd] networking: Multus: [openshift-machine-api/machine-api-operator-88d48b57d-9fjtd/c50317d3-f7cd-4133-845e-44add57ac378]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod machine-api-operator-88d48b57d-9fjtd in out of cluster comm: SetNetworkStatus: failed to update the pod machine-api-operator-88d48b57d-9fjtd in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/machine-api-operator-88d48b57d-9fjtd?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| | openshift-marketplace | kubelet | community-operators-rxhpq | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_community-operators-rxhpq_openshift-marketplace_5ae25dd5-dfdb-42f6-97d0-14ad17743c95_0(430bdcc88caaa1762f9556d2b54d19f20fc0a57316355ff65c95ea047594de49): error adding pod openshift-marketplace_community-operators-rxhpq to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"430bdcc88caaa1762f9556d2b54d19f20fc0a57316355ff65c95ea047594de49" Netns:"/var/run/netns/5e9b3bbd-c0ca-4799-bed4-87bb864607e6" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-rxhpq;K8S_POD_INFRA_CONTAINER_ID=430bdcc88caaa1762f9556d2b54d19f20fc0a57316355ff65c95ea047594de49;K8S_POD_UID=5ae25dd5-dfdb-42f6-97d0-14ad17743c95" Path:"" ERRORED: error configuring pod [openshift-marketplace/community-operators-rxhpq] networking: Multus: [openshift-marketplace/community-operators-rxhpq/5ae25dd5-dfdb-42f6-97d0-14ad17743c95]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod community-operators-rxhpq in out of cluster comm: SetNetworkStatus: failed to update the pod community-operators-rxhpq in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rxhpq?timeout=1m0s": context deadline exceeded (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| | openshift-operator-lifecycle-manager | kubelet | packageserver-675f5c767c-mtdrq | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_packageserver-675f5c767c-mtdrq_openshift-operator-lifecycle-manager_855d7874-16b1-47d0-82f6-d2b0c89b9a84_0(896aef82e1d6cb28b4d2159f1388c189773aa632f6eace1e5d447c4fed3a97a7): error adding pod openshift-operator-lifecycle-manager_packageserver-675f5c767c-mtdrq to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"896aef82e1d6cb28b4d2159f1388c189773aa632f6eace1e5d447c4fed3a97a7" Netns:"/var/run/netns/8aec4def-e56e-4b8f-ae8d-b88220ffeb08" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=packageserver-675f5c767c-mtdrq;K8S_POD_INFRA_CONTAINER_ID=896aef82e1d6cb28b4d2159f1388c189773aa632f6eace1e5d447c4fed3a97a7;K8S_POD_UID=855d7874-16b1-47d0-82f6-d2b0c89b9a84" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/packageserver-675f5c767c-mtdrq] networking: Multus: [openshift-operator-lifecycle-manager/packageserver-675f5c767c-mtdrq/855d7874-16b1-47d0-82f6-d2b0c89b9a84]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod packageserver-675f5c767c-mtdrq in out of cluster comm: SetNetworkStatus: failed to update the pod packageserver-675f5c767c-mtdrq in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/packageserver-675f5c767c-mtdrq?timeout=1m0s": context deadline exceeded ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| | openshift-marketplace | kubelet | certified-operators-8qs8v | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_certified-operators-8qs8v_openshift-marketplace_b620f29d-dcde-4f98-9fb6-dd479dcdcf7c_0(f318ad61b0980d76ec38b561966b7553196e5e4cd94d3483dfaba5a5a10a15dc): error adding pod openshift-marketplace_certified-operators-8qs8v to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"f318ad61b0980d76ec38b561966b7553196e5e4cd94d3483dfaba5a5a10a15dc" Netns:"/var/run/netns/1b61f214-7db3-4926-85d9-cbacbb41d76b" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-8qs8v;K8S_POD_INFRA_CONTAINER_ID=f318ad61b0980d76ec38b561966b7553196e5e4cd94d3483dfaba5a5a10a15dc;K8S_POD_UID=b620f29d-dcde-4f98-9fb6-dd479dcdcf7c" Path:"" ERRORED: error configuring pod [openshift-marketplace/certified-operators-8qs8v] networking: Multus: [openshift-marketplace/certified-operators-8qs8v/b620f29d-dcde-4f98-9fb6-dd479dcdcf7c]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod certified-operators-8qs8v in out of cluster comm: SetNetworkStatus: failed to update the pod certified-operators-8qs8v in out of cluster comm: status update failed for pod /: the server was unable to return a response in the time allotted, but may still be processing the request (get pods certified-operators-8qs8v) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| | openshift-kube-scheduler | kubelet | installer-4-master-0 | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-4-master-0_openshift-kube-scheduler_780ea907-dec5-426e-9de2-158f59c09f71_0(39d9f19548dde8f3c52d8466f9b78f509959aa0e34a1e3caa685d387b613b98e): error adding pod openshift-kube-scheduler_installer-4-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"39d9f19548dde8f3c52d8466f9b78f509959aa0e34a1e3caa685d387b613b98e" Netns:"/var/run/netns/f5b9decb-f3bb-461d-8a7f-0333c0e461c8" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-scheduler;K8S_POD_NAME=installer-4-master-0;K8S_POD_INFRA_CONTAINER_ID=39d9f19548dde8f3c52d8466f9b78f509959aa0e34a1e3caa685d387b613b98e;K8S_POD_UID=780ea907-dec5-426e-9de2-158f59c09f71" Path:"" ERRORED: error configuring pod [openshift-kube-scheduler/installer-4-master-0] networking: Multus: [openshift-kube-scheduler/installer-4-master-0/780ea907-dec5-426e-9de2-158f59c09f71]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-4-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-4-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-4-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| | openshift-controller-manager | kubelet | controller-manager-6686654b8d-rrndk | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-6686654b8d-rrndk_openshift-controller-manager_24506aa4-ab78-49df-bb58-59093498f13d_0(80084e6d846b9ab14d689920224de665b033fb5b2fda92ba002f913e4ec488c5): error adding pod openshift-controller-manager_controller-manager-6686654b8d-rrndk to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"80084e6d846b9ab14d689920224de665b033fb5b2fda92ba002f913e4ec488c5" Netns:"/var/run/netns/d7e501b9-d461-4a07-adb8-dcf7d33c4eaf" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-6686654b8d-rrndk;K8S_POD_INFRA_CONTAINER_ID=80084e6d846b9ab14d689920224de665b033fb5b2fda92ba002f913e4ec488c5;K8S_POD_UID=24506aa4-ab78-49df-bb58-59093498f13d" Path:"" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-6686654b8d-rrndk] networking: Multus: [openshift-controller-manager/controller-manager-6686654b8d-rrndk/24506aa4-ab78-49df-bb58-59093498f13d]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod controller-manager-6686654b8d-rrndk in out of cluster comm: SetNetworkStatus: failed to update the pod controller-manager-6686654b8d-rrndk in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6686654b8d-rrndk?timeout=1m0s": context deadline exceeded ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| | openshift-route-controller-manager | kubelet | route-controller-manager-95cb5f987-46bsk | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-95cb5f987-46bsk_openshift-route-controller-manager_e4d7939a-5961-4608-b910-73e71aa55bf6_0(9519a074afd2181f00fe2f6116d50a06885facad038c0cb4784a9787e54f04e8): error adding pod openshift-route-controller-manager_route-controller-manager-95cb5f987-46bsk to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"9519a074afd2181f00fe2f6116d50a06885facad038c0cb4784a9787e54f04e8" Netns:"/var/run/netns/e22c2237-8bd6-44a7-8811-bb69ea806f2b" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-95cb5f987-46bsk;K8S_POD_INFRA_CONTAINER_ID=9519a074afd2181f00fe2f6116d50a06885facad038c0cb4784a9787e54f04e8;K8S_POD_UID=e4d7939a-5961-4608-b910-73e71aa55bf6" Path:"" ERRORED: error configuring pod [openshift-route-controller-manager/route-controller-manager-95cb5f987-46bsk] networking: Multus: [openshift-route-controller-manager/route-controller-manager-95cb5f987-46bsk/e4d7939a-5961-4608-b910-73e71aa55bf6]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod route-controller-manager-95cb5f987-46bsk in out of cluster comm: SetNetworkStatus: failed to update the pod route-controller-manager-95cb5f987-46bsk in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-95cb5f987-46bsk?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| | openshift-kube-controller-manager | kubelet | installer-2-master-0 | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-2-master-0_openshift-kube-controller-manager_38e7be62-e4f5-42ba-89f0-83aca874a092_0(07b26902c86c19476e0b1231675a3a2190e6256c4ece69045cbb7f750231e1e8): error adding pod openshift-kube-controller-manager_installer-2-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"07b26902c86c19476e0b1231675a3a2190e6256c4ece69045cbb7f750231e1e8" Netns:"/var/run/netns/f85fdd93-a4af-4e9a-9d1a-34ea0b226cb0" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-2-master-0;K8S_POD_INFRA_CONTAINER_ID=07b26902c86c19476e0b1231675a3a2190e6256c4ece69045cbb7f750231e1e8;K8S_POD_UID=38e7be62-e4f5-42ba-89f0-83aca874a092" Path:"" ERRORED: error configuring pod [openshift-kube-controller-manager/installer-2-master-0] networking: Multus: [openshift-kube-controller-manager/installer-2-master-0/38e7be62-e4f5-42ba-89f0-83aca874a092]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-2-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-2-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-2-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| | openshift-network-node-identity | kubelet | network-node-identity-f8hvq | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3e65409fc2b27ad0aaeb500a39e264663d2980821f099b830b551785ce4ce8b" already present on machine |
| | openshift-network-node-identity | kubelet | network-node-identity-f8hvq | Started | Started container approver |
| | openshift-network-node-identity | kubelet | network-node-identity-f8hvq | Created | Created container: approver |
| (x3) | openshift-marketplace | kubelet | marketplace-operator-f797b99b6-hjjrk | ProbeError | Liveness probe error: Get "http://10.128.0.14:8080/healthz": dial tcp 10.128.0.14:8080: connect: connection refused body: |
| (x3) | openshift-marketplace | kubelet | marketplace-operator-f797b99b6-hjjrk | Unhealthy | Readiness probe failed: Get "http://10.128.0.14:8080/healthz": dial tcp 10.128.0.14:8080: connect: connection refused |
| (x3) | openshift-marketplace | kubelet | marketplace-operator-f797b99b6-hjjrk | ProbeError | Readiness probe error: Get "http://10.128.0.14:8080/healthz": dial tcp 10.128.0.14:8080: connect: connection refused body: |
| (x3) | openshift-marketplace | kubelet | marketplace-operator-f797b99b6-hjjrk | Unhealthy | Liveness probe failed: Get "http://10.128.0.14:8080/healthz": dial tcp 10.128.0.14:8080: connect: connection refused |
| | openshift-machine-api | kubelet | machine-api-operator-88d48b57d-9fjtd | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_machine-api-operator-88d48b57d-9fjtd_openshift-machine-api_c50317d3-f7cd-4133-845e-44add57ac378_0(af14fe68c8083f31be169d50a9e29c2a4846a7a7df4592fdfda1587a1b3dc88a): error adding pod openshift-machine-api_machine-api-operator-88d48b57d-9fjtd to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"af14fe68c8083f31be169d50a9e29c2a4846a7a7df4592fdfda1587a1b3dc88a" Netns:"/var/run/netns/d8d0e1e9-6bc3-435a-b27b-ff4ea6b4043d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=machine-api-operator-88d48b57d-9fjtd;K8S_POD_INFRA_CONTAINER_ID=af14fe68c8083f31be169d50a9e29c2a4846a7a7df4592fdfda1587a1b3dc88a;K8S_POD_UID=c50317d3-f7cd-4133-845e-44add57ac378" Path:"" ERRORED: error configuring pod [openshift-machine-api/machine-api-operator-88d48b57d-9fjtd] networking: Multus: [openshift-machine-api/machine-api-operator-88d48b57d-9fjtd/c50317d3-f7cd-4133-845e-44add57ac378]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod machine-api-operator-88d48b57d-9fjtd in out of cluster comm: SetNetworkStatus: failed to update the pod machine-api-operator-88d48b57d-9fjtd in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/machine-api-operator-88d48b57d-9fjtd?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| | openshift-catalogd | kubelet | catalogd-controller-manager-7cc89f4c4c-fd9pv | Unhealthy | Liveness probe failed: Get "http://10.128.0.40:8081/healthz": dial tcp 10.128.0.40:8081: connect: connection refused |
| | openshift-operator-controller | kubelet | operator-controller-controller-manager-7cbd59c7f8-qcz9t | Unhealthy | Liveness probe failed: Get "http://10.128.0.41:8081/healthz": dial tcp 10.128.0.41:8081: connect: connection refused |
| | openshift-operator-controller | kubelet | operator-controller-controller-manager-7cbd59c7f8-qcz9t | ProbeError | Liveness probe error: Get "http://10.128.0.41:8081/healthz": dial tcp 10.128.0.41:8081: connect: connection refused body: |
| | openshift-catalogd | kubelet | catalogd-controller-manager-7cc89f4c4c-fd9pv | ProbeError | Liveness probe error: Get "http://10.128.0.40:8081/healthz": dial tcp 10.128.0.40:8081: connect: connection refused body: |
| | openshift-kube-scheduler | kubelet | installer-4-master-0 | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-4-master-0_openshift-kube-scheduler_780ea907-dec5-426e-9de2-158f59c09f71_0(339a8481bda65b7069e3ce1cf4b8ccd417d7360be9c3218744cf0e0f593c5a48): error adding pod openshift-kube-scheduler_installer-4-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"339a8481bda65b7069e3ce1cf4b8ccd417d7360be9c3218744cf0e0f593c5a48" Netns:"/var/run/netns/a79669c1-92bf-44f8-8b5e-600fe749e0c5" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-scheduler;K8S_POD_NAME=installer-4-master-0;K8S_POD_INFRA_CONTAINER_ID=339a8481bda65b7069e3ce1cf4b8ccd417d7360be9c3218744cf0e0f593c5a48;K8S_POD_UID=780ea907-dec5-426e-9de2-158f59c09f71" Path:"" ERRORED: error configuring pod [openshift-kube-scheduler/installer-4-master-0] networking: Multus: [openshift-kube-scheduler/installer-4-master-0/780ea907-dec5-426e-9de2-158f59c09f71]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-4-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-4-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-4-master-0?timeout=1m0s": context deadline exceeded ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| | openshift-controller-manager | kubelet | controller-manager-6686654b8d-rrndk | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-6686654b8d-rrndk_openshift-controller-manager_24506aa4-ab78-49df-bb58-59093498f13d_0(dd455792c3c0c609a825558a90a6fdaff70c7f5c28eaeed93416f3bf654b7a11): error adding pod openshift-controller-manager_controller-manager-6686654b8d-rrndk to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"dd455792c3c0c609a825558a90a6fdaff70c7f5c28eaeed93416f3bf654b7a11" Netns:"/var/run/netns/0314a257-ead8-4ab3-afce-07d51d75d96d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-6686654b8d-rrndk;K8S_POD_INFRA_CONTAINER_ID=dd455792c3c0c609a825558a90a6fdaff70c7f5c28eaeed93416f3bf654b7a11;K8S_POD_UID=24506aa4-ab78-49df-bb58-59093498f13d" Path:"" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-6686654b8d-rrndk] networking: Multus: [openshift-controller-manager/controller-manager-6686654b8d-rrndk/24506aa4-ab78-49df-bb58-59093498f13d]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod controller-manager-6686654b8d-rrndk in out of cluster comm: SetNetworkStatus: failed to update the pod controller-manager-6686654b8d-rrndk in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6686654b8d-rrndk?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| | openshift-marketplace | kubelet | certified-operators-8qs8v | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_certified-operators-8qs8v_openshift-marketplace_b620f29d-dcde-4f98-9fb6-dd479dcdcf7c_0(fec003270bef36eba6a5a9668b9257cb885914f50d7e88621385b420031b7359): error adding pod openshift-marketplace_certified-operators-8qs8v to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"fec003270bef36eba6a5a9668b9257cb885914f50d7e88621385b420031b7359" Netns:"/var/run/netns/c137c805-c71b-4d0f-abf7-55926f050522" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-8qs8v;K8S_POD_INFRA_CONTAINER_ID=fec003270bef36eba6a5a9668b9257cb885914f50d7e88621385b420031b7359;K8S_POD_UID=b620f29d-dcde-4f98-9fb6-dd479dcdcf7c" Path:"" ERRORED: error configuring pod [openshift-marketplace/certified-operators-8qs8v] networking: Multus: [openshift-marketplace/certified-operators-8qs8v/b620f29d-dcde-4f98-9fb6-dd479dcdcf7c]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod certified-operators-8qs8v in out of cluster comm: SetNetworkStatus: failed to update the pod certified-operators-8qs8v in out of cluster comm: status update failed for pod /: the server was unable to return a response in the time allotted, but may still be processing the request (get pods certified-operators-8qs8v) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| | openshift-kube-controller-manager | kubelet | installer-2-master-0 | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-2-master-0_openshift-kube-controller-manager_38e7be62-e4f5-42ba-89f0-83aca874a092_0(acf5a98f600bd6c83a28743f2310f72c5e66e0b63da427c52d49a771c7352c34): error adding pod openshift-kube-controller-manager_installer-2-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"acf5a98f600bd6c83a28743f2310f72c5e66e0b63da427c52d49a771c7352c34" Netns:"/var/run/netns/05576a5b-072e-4965-beb5-cc6ad5a5f495" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-2-master-0;K8S_POD_INFRA_CONTAINER_ID=acf5a98f600bd6c83a28743f2310f72c5e66e0b63da427c52d49a771c7352c34;K8S_POD_UID=38e7be62-e4f5-42ba-89f0-83aca874a092" Path:"" ERRORED: error configuring pod [openshift-kube-controller-manager/installer-2-master-0] networking: Multus: [openshift-kube-controller-manager/installer-2-master-0/38e7be62-e4f5-42ba-89f0-83aca874a092]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-2-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-2-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-2-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| | openshift-route-controller-manager | kubelet | route-controller-manager-95cb5f987-46bsk | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-95cb5f987-46bsk_openshift-route-controller-manager_e4d7939a-5961-4608-b910-73e71aa55bf6_0(aea5fc223c451dccb2eb2f140edba281c9bae377ca308d1bba7b28f3cd4f529d): error adding pod openshift-route-controller-manager_route-controller-manager-95cb5f987-46bsk to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"aea5fc223c451dccb2eb2f140edba281c9bae377ca308d1bba7b28f3cd4f529d" Netns:"/var/run/netns/13e2d887-db77-4ff8-8664-99e6741ac5b6" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-95cb5f987-46bsk;K8S_POD_INFRA_CONTAINER_ID=aea5fc223c451dccb2eb2f140edba281c9bae377ca308d1bba7b28f3cd4f529d;K8S_POD_UID=e4d7939a-5961-4608-b910-73e71aa55bf6" Path:"" ERRORED: error configuring pod [openshift-route-controller-manager/route-controller-manager-95cb5f987-46bsk] networking: Multus: [openshift-route-controller-manager/route-controller-manager-95cb5f987-46bsk/e4d7939a-5961-4608-b910-73e71aa55bf6]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod route-controller-manager-95cb5f987-46bsk in out of cluster comm: SetNetworkStatus: failed to update the pod route-controller-manager-95cb5f987-46bsk in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-95cb5f987-46bsk?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| | openshift-operator-lifecycle-manager | kubelet | packageserver-675f5c767c-mtdrq | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_packageserver-675f5c767c-mtdrq_openshift-operator-lifecycle-manager_855d7874-16b1-47d0-82f6-d2b0c89b9a84_0(b5269202f491395be1819acda94362a4aa1e2a9aed4f6562916b06241919d0f4): error adding pod openshift-operator-lifecycle-manager_packageserver-675f5c767c-mtdrq to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"b5269202f491395be1819acda94362a4aa1e2a9aed4f6562916b06241919d0f4" Netns:"/var/run/netns/59d1358a-5227-419d-89e4-6be9839ac080" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=packageserver-675f5c767c-mtdrq;K8S_POD_INFRA_CONTAINER_ID=b5269202f491395be1819acda94362a4aa1e2a9aed4f6562916b06241919d0f4;K8S_POD_UID=855d7874-16b1-47d0-82f6-d2b0c89b9a84" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/packageserver-675f5c767c-mtdrq] networking: Multus: [openshift-operator-lifecycle-manager/packageserver-675f5c767c-mtdrq/855d7874-16b1-47d0-82f6-d2b0c89b9a84]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod packageserver-675f5c767c-mtdrq in out of cluster comm: SetNetworkStatus: failed to update the pod packageserver-675f5c767c-mtdrq in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/packageserver-675f5c767c-mtdrq?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| | openshift-marketplace | kubelet | community-operators-rxhpq | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_community-operators-rxhpq_openshift-marketplace_5ae25dd5-dfdb-42f6-97d0-14ad17743c95_0(652d70c8af64d6dd54a5fa84dd4bc7a269eecdaf0219d1fb02b060f818f69894): error adding pod openshift-marketplace_community-operators-rxhpq to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"652d70c8af64d6dd54a5fa84dd4bc7a269eecdaf0219d1fb02b060f818f69894" Netns:"/var/run/netns/b8ff09ab-23a1-4c12-ac5f-c9e7449a07b2" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-rxhpq;K8S_POD_INFRA_CONTAINER_ID=652d70c8af64d6dd54a5fa84dd4bc7a269eecdaf0219d1fb02b060f818f69894;K8S_POD_UID=5ae25dd5-dfdb-42f6-97d0-14ad17743c95" Path:"" ERRORED: error configuring pod [openshift-marketplace/community-operators-rxhpq] networking: Multus: [openshift-marketplace/community-operators-rxhpq/5ae25dd5-dfdb-42f6-97d0-14ad17743c95]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod community-operators-rxhpq in out of cluster comm: SetNetworkStatus: failed to update the pod community-operators-rxhpq in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rxhpq?timeout=1m0s": context deadline exceeded ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| (x3) | openshift-operator-controller | kubelet | operator-controller-controller-manager-7cbd59c7f8-qcz9t | ProbeError | Readiness probe error: Get "http://10.128.0.41:8081/readyz": dial tcp 10.128.0.41:8081: connect: connection refused body: |
| (x3) | openshift-catalogd | kubelet | catalogd-controller-manager-7cc89f4c4c-fd9pv | Unhealthy | Readiness probe failed: Get "http://10.128.0.40:8081/readyz": dial tcp 10.128.0.40:8081: connect: connection refused |
| (x3) | openshift-operator-controller | kubelet | operator-controller-controller-manager-7cbd59c7f8-qcz9t | Unhealthy | Readiness probe failed: Get "http://10.128.0.41:8081/readyz": dial tcp 10.128.0.41:8081: connect: connection refused |
| (x3) | openshift-catalogd | kubelet | catalogd-controller-manager-7cc89f4c4c-fd9pv | ProbeError | Readiness probe error: Get "http://10.128.0.40:8081/readyz": dial tcp 10.128.0.40:8081: connect: connection refused body: |
| | openshift-marketplace | kubelet | redhat-operators-wksdw | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" |
| | openshift-marketplace | kubelet | redhat-operators-wksdw | Created | Created container: registry-server |
| | openshift-marketplace | kubelet | redhat-operators-wksdw | Started | Started container registry-server |
| | openshift-marketplace | kubelet | redhat-operators-wksdw | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" in 465ms (465ms including waiting). Image size: 912722556 bytes. |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:188637a52cafee61ec461e92fb0c605e28be325b9ac1f2ac8a37d68e97654718" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcd-resources-copy |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcd-resources-copy |
openshift-machine-api |
kubelet |
machine-api-operator-88d48b57d-9fjtd |
FailedCreatePodSandBox |
Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_machine-api-operator-88d48b57d-9fjtd_openshift-machine-api_c50317d3-f7cd-4133-845e-44add57ac378_0(21bc0e2f8c3e2b30320dbc8a79d7d97b34ea191a5786a54f69ac61586aa362d5): error adding pod openshift-machine-api_machine-api-operator-88d48b57d-9fjtd to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"21bc0e2f8c3e2b30320dbc8a79d7d97b34ea191a5786a54f69ac61586aa362d5" Netns:"/var/run/netns/2052808d-2414-40cf-bf30-07beb222e23d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=machine-api-operator-88d48b57d-9fjtd;K8S_POD_INFRA_CONTAINER_ID=21bc0e2f8c3e2b30320dbc8a79d7d97b34ea191a5786a54f69ac61586aa362d5;K8S_POD_UID=c50317d3-f7cd-4133-845e-44add57ac378" Path:"" ERRORED: error configuring pod [openshift-machine-api/machine-api-operator-88d48b57d-9fjtd] networking: Multus: [openshift-machine-api/machine-api-operator-88d48b57d-9fjtd/c50317d3-f7cd-4133-845e-44add57ac378]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod machine-api-operator-88d48b57d-9fjtd in out of cluster comm: SetNetworkStatus: failed to update the pod machine-api-operator-88d48b57d-9fjtd in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/machine-api-operator-88d48b57d-9fjtd?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} | |
openshift-marketplace |
kubelet |
certified-operators-8qs8v |
FailedCreatePodSandBox |
Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_certified-operators-8qs8v_openshift-marketplace_b620f29d-dcde-4f98-9fb6-dd479dcdcf7c_0(a2f1cab9913e82565ffcd50d70d2eba4367c89704fc9961444619678bf490ab9): error adding pod openshift-marketplace_certified-operators-8qs8v to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"a2f1cab9913e82565ffcd50d70d2eba4367c89704fc9961444619678bf490ab9" Netns:"/var/run/netns/fb614bdf-d900-473a-9d5e-c49d2ba563da" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-8qs8v;K8S_POD_INFRA_CONTAINER_ID=a2f1cab9913e82565ffcd50d70d2eba4367c89704fc9961444619678bf490ab9;K8S_POD_UID=b620f29d-dcde-4f98-9fb6-dd479dcdcf7c" Path:"" ERRORED: error configuring pod [openshift-marketplace/certified-operators-8qs8v] networking: Multus: [openshift-marketplace/certified-operators-8qs8v/b620f29d-dcde-4f98-9fb6-dd479dcdcf7c]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod certified-operators-8qs8v in out of cluster comm: SetNetworkStatus: failed to update the pod certified-operators-8qs8v in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8qs8v?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} | |
openshift-kube-controller-manager |
kubelet |
installer-2-master-0 |
FailedCreatePodSandBox |
Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-2-master-0_openshift-kube-controller-manager_38e7be62-e4f5-42ba-89f0-83aca874a092_0(d1537d2edd419ed8b01c58612db0b8ec6b2e006099ec1b3cdcd856307a088d63): error adding pod openshift-kube-controller-manager_installer-2-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"d1537d2edd419ed8b01c58612db0b8ec6b2e006099ec1b3cdcd856307a088d63" Netns:"/var/run/netns/5a243ddf-ba4d-40c2-894c-e6b946852288" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-2-master-0;K8S_POD_INFRA_CONTAINER_ID=d1537d2edd419ed8b01c58612db0b8ec6b2e006099ec1b3cdcd856307a088d63;K8S_POD_UID=38e7be62-e4f5-42ba-89f0-83aca874a092" Path:"" ERRORED: error configuring pod [openshift-kube-controller-manager/installer-2-master-0] networking: Multus: [openshift-kube-controller-manager/installer-2-master-0/38e7be62-e4f5-42ba-89f0-83aca874a092]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-2-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-2-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-2-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} | |
openshift-kube-scheduler |
kubelet |
installer-4-master-0 |
FailedCreatePodSandBox |
Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-4-master-0_openshift-kube-scheduler_780ea907-dec5-426e-9de2-158f59c09f71_0(369b0e712547133da414f9c9a855b9d0e50ac7147d0b5cb829923e11cbca3b38): error adding pod openshift-kube-scheduler_installer-4-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"369b0e712547133da414f9c9a855b9d0e50ac7147d0b5cb829923e11cbca3b38" Netns:"/var/run/netns/0ef64b83-96aa-4011-8bd8-49872e23d186" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-scheduler;K8S_POD_NAME=installer-4-master-0;K8S_POD_INFRA_CONTAINER_ID=369b0e712547133da414f9c9a855b9d0e50ac7147d0b5cb829923e11cbca3b38;K8S_POD_UID=780ea907-dec5-426e-9de2-158f59c09f71" Path:"" ERRORED: error configuring pod [openshift-kube-scheduler/installer-4-master-0] networking: Multus: [openshift-kube-scheduler/installer-4-master-0/780ea907-dec5-426e-9de2-158f59c09f71]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-4-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-4-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-4-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} | |
openshift-controller-manager |
kubelet |
controller-manager-6686654b8d-rrndk |
FailedCreatePodSandBox |
Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-6686654b8d-rrndk_openshift-controller-manager_24506aa4-ab78-49df-bb58-59093498f13d_0(2d160b6d52d8b992b0177da71ff6a4532495aa60365c9fbd31af5eef7c1b5925): error adding pod openshift-controller-manager_controller-manager-6686654b8d-rrndk to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"2d160b6d52d8b992b0177da71ff6a4532495aa60365c9fbd31af5eef7c1b5925" Netns:"/var/run/netns/e76f4cd1-de22-49cc-8cce-dc93a58a95e5" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-6686654b8d-rrndk;K8S_POD_INFRA_CONTAINER_ID=2d160b6d52d8b992b0177da71ff6a4532495aa60365c9fbd31af5eef7c1b5925;K8S_POD_UID=24506aa4-ab78-49df-bb58-59093498f13d" Path:"" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-6686654b8d-rrndk] networking: Multus: [openshift-controller-manager/controller-manager-6686654b8d-rrndk/24506aa4-ab78-49df-bb58-59093498f13d]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod controller-manager-6686654b8d-rrndk in out of cluster comm: SetNetworkStatus: failed to update the pod controller-manager-6686654b8d-rrndk in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6686654b8d-rrndk?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} | |
openshift-marketplace |
kubelet |
community-operators-rxhpq |
FailedCreatePodSandBox |
Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_community-operators-rxhpq_openshift-marketplace_5ae25dd5-dfdb-42f6-97d0-14ad17743c95_0(2a08ae233b0c6df618013e5730e922895daf39d0d046d0e5a3e095a6944ada39): error adding pod openshift-marketplace_community-operators-rxhpq to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"2a08ae233b0c6df618013e5730e922895daf39d0d046d0e5a3e095a6944ada39" Netns:"/var/run/netns/ab61f939-0bac-4447-a61f-6ec731121164" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-rxhpq;K8S_POD_INFRA_CONTAINER_ID=2a08ae233b0c6df618013e5730e922895daf39d0d046d0e5a3e095a6944ada39;K8S_POD_UID=5ae25dd5-dfdb-42f6-97d0-14ad17743c95" Path:"" ERRORED: error configuring pod [openshift-marketplace/community-operators-rxhpq] networking: Multus: [openshift-marketplace/community-operators-rxhpq/5ae25dd5-dfdb-42f6-97d0-14ad17743c95]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod community-operators-rxhpq in out of cluster comm: SetNetworkStatus: failed to update the pod community-operators-rxhpq in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rxhpq?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} | |
openshift-machine-api |
kubelet |
machine-api-operator-88d48b57d-9fjtd |
FailedCreatePodSandBox |
Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_machine-api-operator-88d48b57d-9fjtd_openshift-machine-api_c50317d3-f7cd-4133-845e-44add57ac378_0(97bd237a7b73814f3563223ef08fe64c059bb01e4f8387a182eaaae722ea06a1): error adding pod openshift-machine-api_machine-api-operator-88d48b57d-9fjtd to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"97bd237a7b73814f3563223ef08fe64c059bb01e4f8387a182eaaae722ea06a1" Netns:"/var/run/netns/577c5238-fde8-4e8e-9202-5e640ae52ce7" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=machine-api-operator-88d48b57d-9fjtd;K8S_POD_INFRA_CONTAINER_ID=97bd237a7b73814f3563223ef08fe64c059bb01e4f8387a182eaaae722ea06a1;K8S_POD_UID=c50317d3-f7cd-4133-845e-44add57ac378" Path:"" ERRORED: error configuring pod [openshift-machine-api/machine-api-operator-88d48b57d-9fjtd] networking: Multus: [openshift-machine-api/machine-api-operator-88d48b57d-9fjtd/c50317d3-f7cd-4133-845e-44add57ac378]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod machine-api-operator-88d48b57d-9fjtd in out of cluster comm: SetNetworkStatus: failed to update the pod machine-api-operator-88d48b57d-9fjtd in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/machine-api-operator-88d48b57d-9fjtd?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} | |
| (x6) | openshift-authentication-operator |
kubelet |
authentication-operator-6c968fdfdf-nrrfw |
Unhealthy |
Liveness probe failed: Get "https://10.128.0.11:8443/healthz": dial tcp 10.128.0.11:8443: connect: connection refused |
| (x6) | openshift-authentication-operator |
kubelet |
authentication-operator-6c968fdfdf-nrrfw |
ProbeError |
Liveness probe error: Get "https://10.128.0.11:8443/healthz": dial tcp 10.128.0.11:8443: connect: connection refused body: |
openshift-operator-lifecycle-manager |
kubelet |
packageserver-675f5c767c-mtdrq |
FailedCreatePodSandBox |
Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_packageserver-675f5c767c-mtdrq_openshift-operator-lifecycle-manager_855d7874-16b1-47d0-82f6-d2b0c89b9a84_0(50d5ddf79bd5de442fb43ec994762ef41de345f87e9c87e14eaf6c0f2b22e980): error adding pod openshift-operator-lifecycle-manager_packageserver-675f5c767c-mtdrq to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"50d5ddf79bd5de442fb43ec994762ef41de345f87e9c87e14eaf6c0f2b22e980" Netns:"/var/run/netns/9708ead0-bfd0-4a87-9250-4e5efff45dab" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=packageserver-675f5c767c-mtdrq;K8S_POD_INFRA_CONTAINER_ID=50d5ddf79bd5de442fb43ec994762ef41de345f87e9c87e14eaf6c0f2b22e980;K8S_POD_UID=855d7874-16b1-47d0-82f6-d2b0c89b9a84" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/packageserver-675f5c767c-mtdrq] networking: Multus: [openshift-operator-lifecycle-manager/packageserver-675f5c767c-mtdrq/855d7874-16b1-47d0-82f6-d2b0c89b9a84]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod packageserver-675f5c767c-mtdrq in out of cluster comm: SetNetworkStatus: failed to update the pod packageserver-675f5c767c-mtdrq in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/packageserver-675f5c767c-mtdrq?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} | |
openshift-route-controller-manager |
kubelet |
route-controller-manager-95cb5f987-46bsk |
FailedCreatePodSandBox |
Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-95cb5f987-46bsk_openshift-route-controller-manager_e4d7939a-5961-4608-b910-73e71aa55bf6_0(639b179c776f4ece5a98ff46eeb6b6de6d24a404e0d5f5af53ad6080b597a9d1): error adding pod openshift-route-controller-manager_route-controller-manager-95cb5f987-46bsk to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"639b179c776f4ece5a98ff46eeb6b6de6d24a404e0d5f5af53ad6080b597a9d1" Netns:"/var/run/netns/e33289fb-8fac-4c47-9f5b-3d6ca06ce72b" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-95cb5f987-46bsk;K8S_POD_INFRA_CONTAINER_ID=639b179c776f4ece5a98ff46eeb6b6de6d24a404e0d5f5af53ad6080b597a9d1;K8S_POD_UID=e4d7939a-5961-4608-b910-73e71aa55bf6" Path:"" ERRORED: error configuring pod [openshift-route-controller-manager/route-controller-manager-95cb5f987-46bsk] networking: Multus: [openshift-route-controller-manager/route-controller-manager-95cb5f987-46bsk/e4d7939a-5961-4608-b910-73e71aa55bf6]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod route-controller-manager-95cb5f987-46bsk in out of cluster comm: SetNetworkStatus: failed to update the pod route-controller-manager-95cb5f987-46bsk in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-95cb5f987-46bsk?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} | |
openshift-controller-manager-operator |
kubelet |
openshift-controller-manager-operator-6c8676f99d-7z948 |
BackOff |
Back-off restarting failed container openshift-controller-manager-operator in pod openshift-controller-manager-operator-6c8676f99d-7z948_openshift-controller-manager-operator(3322cc5a-f1f7-4522-b423-19fb7f38cd43) | |
| (x2) | openshift-apiserver-operator |
kubelet |
openshift-apiserver-operator-7bf7f6b755-sh6qf |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8375671da86aa527ee7e291d86971b0baa823ffc7663b5a983084456e76c0f59" already present on machine |
| (x2) | openshift-kube-apiserver-operator |
kubelet |
kube-apiserver-operator-765d9ff747-gr68k |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f43c31aa3359159d4557dad3cfaf812d8ce44db9cb9ae970e06d3479070b660" already present on machine |
| (x3) | kube-system |
kubelet |
bootstrap-kube-controller-manager-master-0 |
Created |
Created container: kube-controller-manager |
| (x3) | openshift-apiserver-operator |
kubelet |
openshift-apiserver-operator-7bf7f6b755-sh6qf |
Created |
Created container: openshift-apiserver-operator |
| (x2) | openshift-kube-apiserver-operator |
kubelet |
kube-apiserver-operator-765d9ff747-gr68k |
Started |
Started container kube-apiserver-operator |
| (x2) | openshift-kube-apiserver-operator |
kubelet |
kube-apiserver-operator-765d9ff747-gr68k |
Created |
Created container: kube-apiserver-operator |
| (x3) | kube-system |
kubelet |
bootstrap-kube-controller-manager-master-0 |
Started |
Started container kube-controller-manager |
| (x3) | openshift-apiserver-operator |
kubelet |
openshift-apiserver-operator-7bf7f6b755-sh6qf |
Started |
Started container openshift-apiserver-operator |
openshift-etcd |
kubelet |
etcd-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:188637a52cafee61ec461e92fb0c605e28be325b9ac1f2ac8a37d68e97654718" already present on machine | |
openshift-etcd |
kubelet |
etcd-master-0 |
Created |
Created container: etcd-metrics | |
openshift-etcd |
kubelet |
etcd-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:188637a52cafee61ec461e92fb0c605e28be325b9ac1f2ac8a37d68e97654718" already present on machine | |
openshift-etcd |
kubelet |
etcd-master-0 |
Started |
Started container etcd-metrics | |
openshift-etcd |
kubelet |
etcd-master-0 |
Started |
Started container etcd | |
openshift-etcd |
kubelet |
etcd-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f4d4282cb53325e737ad68abbfcb70687ae04fb50353f4f0ba0ba5703b15009a" already present on machine | |
openshift-etcd |
kubelet |
etcd-master-0 |
Created |
Created container: etcd | |
openshift-etcd |
kubelet |
etcd-master-0 |
Created |
Created container: etcdctl | |
openshift-etcd |
kubelet |
etcd-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:188637a52cafee61ec461e92fb0c605e28be325b9ac1f2ac8a37d68e97654718" already present on machine | |
openshift-etcd |
kubelet |
etcd-master-0 |
Started |
Started container etcdctl | |
openshift-etcd |
kubelet |
etcd-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f4d4282cb53325e737ad68abbfcb70687ae04fb50353f4f0ba0ba5703b15009a" already present on machine | |
openshift-etcd |
kubelet |
etcd-master-0 |
Started |
Started container etcd-readyz | |
openshift-etcd |
kubelet |
etcd-master-0 |
Started |
Started container etcd-rev | |
openshift-etcd |
kubelet |
etcd-master-0 |
Created |
Created container: etcd-rev | |
openshift-etcd |
kubelet |
etcd-master-0 |
Created |
Created container: etcd-readyz | |
| (x3) | openshift-config-operator |
kubelet |
openshift-config-operator-68758cbcdb-zh8g5 |
ProbeError |
Liveness probe error: Get "https://10.128.0.10:8443/healthz": dial tcp 10.128.0.10:8443: connect: connection refused body: |
| (x3) | openshift-config-operator |
kubelet |
openshift-config-operator-68758cbcdb-zh8g5 |
Unhealthy |
Liveness probe failed: Get "https://10.128.0.10:8443/healthz": dial tcp 10.128.0.10:8443: connect: connection refused |
| (x6) | openshift-config-operator |
kubelet |
openshift-config-operator-68758cbcdb-zh8g5 |
Unhealthy |
Readiness probe failed: Get "https://10.128.0.10:8443/healthz": dial tcp 10.128.0.10:8443: connect: connection refused |
openshift-kube-storage-version-migrator-operator |
kubelet |
kube-storage-version-migrator-operator-b9c5dfc78-dcxkw |
BackOff |
Back-off restarting failed container kube-storage-version-migrator-operator in pod kube-storage-version-migrator-operator-b9c5dfc78-dcxkw_openshift-kube-storage-version-migrator-operator(fd9f8671-8066-4990-b45d-8b619aa5d9ec) | |
openshift-kube-scheduler-operator |
kubelet |
openshift-kube-scheduler-operator-5f85974995-g4rwv |
BackOff |
Back-off restarting failed container kube-scheduler-operator-container in pod openshift-kube-scheduler-operator-5f85974995-g4rwv_openshift-kube-scheduler-operator(27576028-d64a-47cd-b76b-d524e41efe37) | |
openshift-service-ca-operator |
kubelet |
service-ca-operator-77758bc754-8smqn |
BackOff |
Back-off restarting failed container service-ca-operator in pod service-ca-operator-77758bc754-8smqn_openshift-service-ca-operator(8e5e33b0-4e2e-464c-ba5b-cbd2048004f8) | |
openshift-authentication-operator |
kubelet |
authentication-operator-6c968fdfdf-nrrfw |
BackOff |
Back-off restarting failed container authentication-operator in pod authentication-operator-6c968fdfdf-nrrfw_openshift-authentication-operator(42b3be0f-1d82-4a64-abb4-0118a6960efd) | |
| (x7) | openshift-config-operator |
kubelet |
openshift-config-operator-68758cbcdb-zh8g5 |
ProbeError |
Readiness probe error: Get "https://10.128.0.10:8443/healthz": dial tcp 10.128.0.10:8443: connect: connection refused body: |
openshift-network-operator |
kubelet |
network-operator-79767b7ff9-5bgzx |
BackOff |
Back-off restarting failed container network-operator in pod network-operator-79767b7ff9-5bgzx_openshift-network-operator(ffaf95be-586c-40fa-b15f-1beaffe7ff1c) | |
openshift-cluster-version |
openshift-cluster-version |
version |
LeaderElection |
master-0_24c3cd95-5f05-4668-945c-5fee4fae08e7 stopped leading | |
openshift-machine-api |
kubelet |
machine-api-operator-88d48b57d-9fjtd |
FailedCreatePodSandBox |
Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_machine-api-operator-88d48b57d-9fjtd_openshift-machine-api_c50317d3-f7cd-4133-845e-44add57ac378_0(b8a357e4d9b7049620c368e3117a218cab578e2787de9134003b973d9b4b0f47): error adding pod openshift-machine-api_machine-api-operator-88d48b57d-9fjtd to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"b8a357e4d9b7049620c368e3117a218cab578e2787de9134003b973d9b4b0f47" Netns:"/var/run/netns/afdfd7cd-85b7-4aec-840d-af54b181ee50" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=machine-api-operator-88d48b57d-9fjtd;K8S_POD_INFRA_CONTAINER_ID=b8a357e4d9b7049620c368e3117a218cab578e2787de9134003b973d9b4b0f47;K8S_POD_UID=c50317d3-f7cd-4133-845e-44add57ac378" Path:"" ERRORED: error configuring pod [openshift-machine-api/machine-api-operator-88d48b57d-9fjtd] networking: Multus: [openshift-machine-api/machine-api-operator-88d48b57d-9fjtd/c50317d3-f7cd-4133-845e-44add57ac378]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod machine-api-operator-88d48b57d-9fjtd in out of cluster comm: SetNetworkStatus: failed to update the pod machine-api-operator-88d48b57d-9fjtd in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/machine-api-operator-88d48b57d-9fjtd?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} | |
openshift-cloud-controller-manager-operator |
master-0_18d7c457-b7ff-408e-a628-92a59904506e |
cluster-cloud-controller-manager-leader |
LeaderElection |
master-0_18d7c457-b7ff-408e-a628-92a59904506e became leader | |
openshift-machine-api |
cluster-autoscaler-operator-5f49d774cd-894dk_1c3e7619-6cc9-4a4c-8c03-8a7ce2ef0a8c |
cluster-autoscaler-operator-leader |
LeaderElection |
cluster-autoscaler-operator-5f49d774cd-894dk_1c3e7619-6cc9-4a4c-8c03-8a7ce2ef0a8c became leader | |
openshift-ovn-kubernetes |
ovnk-controlplane |
ovn-kubernetes-master |
LeaderElection |
ovnkube-control-plane-5df5548d54-jjfhq became leader | |
| (x2) | openshift-operator-lifecycle-manager |
kubelet |
package-server-manager-67477646d4-7hndf |
Unhealthy |
Readiness probe failed: Get "http://10.128.0.13:8080/healthz": dial tcp 10.128.0.13:8080: connect: connection refused |
| (x4) | openshift-etcd-operator |
kubelet |
etcd-operator-5bf4d88c6f-2bpmr |
ProbeError |
Liveness probe error: Get "https://10.128.0.21:8443/healthz": dial tcp 10.128.0.21:8443: connect: connection refused body: |
| (x2) | openshift-operator-lifecycle-manager |
kubelet |
package-server-manager-67477646d4-7hndf |
ProbeError |
Liveness probe error: Get "http://10.128.0.13:8080/healthz": dial tcp 10.128.0.13:8080: connect: connection refused body: |
| (x2) | openshift-operator-lifecycle-manager |
kubelet |
package-server-manager-67477646d4-7hndf |
ProbeError |
Readiness probe error: Get "http://10.128.0.13:8080/healthz": dial tcp 10.128.0.13:8080: connect: connection refused body: |
| (x4) | openshift-etcd-operator |
kubelet |
etcd-operator-5bf4d88c6f-2bpmr |
Unhealthy |
Liveness probe failed: Get "https://10.128.0.21:8443/healthz": dial tcp 10.128.0.21:8443: connect: connection refused |
| (x2) | openshift-operator-lifecycle-manager |
kubelet |
package-server-manager-67477646d4-7hndf |
Unhealthy |
Liveness probe failed: Get "http://10.128.0.13:8080/healthz": dial tcp 10.128.0.13:8080: connect: connection refused |
openshift-kube-controller-manager |
kubelet |
installer-2-master-0 |
FailedCreatePodSandBox |
Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-2-master-0_openshift-kube-controller-manager_38e7be62-e4f5-42ba-89f0-83aca874a092_0(04ebacdbcb644ae006a00ae82dfcd15d62a4254c398f5c5c941eb7cab1b204e1): error adding pod openshift-kube-controller-manager_installer-2-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"04ebacdbcb644ae006a00ae82dfcd15d62a4254c398f5c5c941eb7cab1b204e1" Netns:"/var/run/netns/7a5fa44b-229f-4be7-a3ea-eefe545a0517" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-2-master-0;K8S_POD_INFRA_CONTAINER_ID=04ebacdbcb644ae006a00ae82dfcd15d62a4254c398f5c5c941eb7cab1b204e1;K8S_POD_UID=38e7be62-e4f5-42ba-89f0-83aca874a092" Path:"" ERRORED: error configuring pod [openshift-kube-controller-manager/installer-2-master-0] networking: Multus: [openshift-kube-controller-manager/installer-2-master-0/38e7be62-e4f5-42ba-89f0-83aca874a092]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-2-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-2-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-2-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} | |
openshift-kube-scheduler |
kubelet |
installer-4-master-0 |
FailedCreatePodSandBox |
Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-4-master-0_openshift-kube-scheduler_780ea907-dec5-426e-9de2-158f59c09f71_0(bed382eba2a719f3151923c86b8ac4743d06c032477ad90976b29f8a069a781e): error adding pod openshift-kube-scheduler_installer-4-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"bed382eba2a719f3151923c86b8ac4743d06c032477ad90976b29f8a069a781e" Netns:"/var/run/netns/a65c453e-1eb1-43ed-bfdc-02ac76de4d95" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-scheduler;K8S_POD_NAME=installer-4-master-0;K8S_POD_INFRA_CONTAINER_ID=bed382eba2a719f3151923c86b8ac4743d06c032477ad90976b29f8a069a781e;K8S_POD_UID=780ea907-dec5-426e-9de2-158f59c09f71" Path:"" ERRORED: error configuring pod [openshift-kube-scheduler/installer-4-master-0] networking: Multus: [openshift-kube-scheduler/installer-4-master-0/780ea907-dec5-426e-9de2-158f59c09f71]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-4-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-4-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-4-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} | |
openshift-marketplace |
kubelet |
community-operators-rxhpq |
FailedCreatePodSandBox |
Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_community-operators-rxhpq_openshift-marketplace_5ae25dd5-dfdb-42f6-97d0-14ad17743c95_0(0ecca18a3a940680db77cae1d8b8234a78f0c29efebfeaa15eb406faccfeda9d): error adding pod openshift-marketplace_community-operators-rxhpq to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"0ecca18a3a940680db77cae1d8b8234a78f0c29efebfeaa15eb406faccfeda9d" Netns:"/var/run/netns/65ce7944-cb85-4670-bb15-4627a14de53b" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-rxhpq;K8S_POD_INFRA_CONTAINER_ID=0ecca18a3a940680db77cae1d8b8234a78f0c29efebfeaa15eb406faccfeda9d;K8S_POD_UID=5ae25dd5-dfdb-42f6-97d0-14ad17743c95" Path:"" ERRORED: error configuring pod [openshift-marketplace/community-operators-rxhpq] networking: Multus: [openshift-marketplace/community-operators-rxhpq/5ae25dd5-dfdb-42f6-97d0-14ad17743c95]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod community-operators-rxhpq in out of cluster comm: SetNetworkStatus: failed to update the pod community-operators-rxhpq in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-rxhpq?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} | |
openshift-controller-manager |
kubelet |
controller-manager-6686654b8d-rrndk |
FailedCreatePodSandBox |
Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-6686654b8d-rrndk_openshift-controller-manager_24506aa4-ab78-49df-bb58-59093498f13d_0(cfd724b83db7eda66758ee8447129a7dfeb31809a751bc0358c6f09a9428e9ee): error adding pod openshift-controller-manager_controller-manager-6686654b8d-rrndk to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"cfd724b83db7eda66758ee8447129a7dfeb31809a751bc0358c6f09a9428e9ee" Netns:"/var/run/netns/189d0a6b-f161-4d53-a497-b9eb33e6c80e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-6686654b8d-rrndk;K8S_POD_INFRA_CONTAINER_ID=cfd724b83db7eda66758ee8447129a7dfeb31809a751bc0358c6f09a9428e9ee;K8S_POD_UID=24506aa4-ab78-49df-bb58-59093498f13d" Path:"" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-6686654b8d-rrndk] networking: Multus: [openshift-controller-manager/controller-manager-6686654b8d-rrndk/24506aa4-ab78-49df-bb58-59093498f13d]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod controller-manager-6686654b8d-rrndk in out of cluster comm: SetNetworkStatus: failed to update the pod controller-manager-6686654b8d-rrndk in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6686654b8d-rrndk?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} | |
openshift-marketplace |
kubelet |
certified-operators-8qs8v |
FailedCreatePodSandBox |
Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_certified-operators-8qs8v_openshift-marketplace_b620f29d-dcde-4f98-9fb6-dd479dcdcf7c_0(e74346efbf42471b8e65ff042f712147ecf9da8b437316001d266d242514be2d): error adding pod openshift-marketplace_certified-operators-8qs8v to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"e74346efbf42471b8e65ff042f712147ecf9da8b437316001d266d242514be2d" Netns:"/var/run/netns/dbad1133-6a04-4887-b892-91abbd10de95" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-8qs8v;K8S_POD_INFRA_CONTAINER_ID=e74346efbf42471b8e65ff042f712147ecf9da8b437316001d266d242514be2d;K8S_POD_UID=b620f29d-dcde-4f98-9fb6-dd479dcdcf7c" Path:"" ERRORED: error configuring pod [openshift-marketplace/certified-operators-8qs8v] networking: Multus: [openshift-marketplace/certified-operators-8qs8v/b620f29d-dcde-4f98-9fb6-dd479dcdcf7c]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod certified-operators-8qs8v in out of cluster comm: SetNetworkStatus: failed to update the pod certified-operators-8qs8v in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8qs8v?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} | |
openshift-cluster-machine-approver |
master-0_70b19edc-2114-4c95-8724-ecf2ab490e55 |
cluster-machine-approver-leader |
LeaderElection |
master-0_70b19edc-2114-4c95-8724-ecf2ab490e55 became leader | |
| (x3) | kube-system |
kubelet |
bootstrap-kube-controller-manager-master-0 |
Unhealthy |
Readiness probe failed: Get "https://localhost:10357/healthz": dial tcp [::1]:10357: connect: connection refused |
openshift-cloud-controller-manager-operator |
master-0_60050ae3-7159-4aa3-bf6a-658c1cee7966 |
cluster-cloud-config-sync-leader |
LeaderElection |
master-0_60050ae3-7159-4aa3-bf6a-658c1cee7966 became leader | |
| (x3) | kube-system |
kubelet |
bootstrap-kube-controller-manager-master-0 |
Unhealthy |
Liveness probe failed: Get "https://localhost:10357/healthz": dial tcp [::1]:10357: connect: connection refused |
| (x3) | openshift-kube-storage-version-migrator-operator |
kubelet |
kube-storage-version-migrator-operator-b9c5dfc78-dcxkw |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:75d996f6147edb88c09fd1a052099de66638590d7d03a735006244bc9e19f898" already present on machine |
kube-system |
kubelet |
bootstrap-kube-controller-manager-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d64c13fe7663a0b4ae61d103b1b7598adcf317a01826f296bcb66b1a2de83c96" already present on machine | |
| (x3) | openshift-authentication-operator |
kubelet |
authentication-operator-6c968fdfdf-nrrfw |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e85850a4ae1a1e3ec2c590a4936d640882b6550124da22031c85b526afbf52df" already present on machine |
| (x2) | openshift-cluster-olm-operator |
kubelet |
cluster-olm-operator-56fcb6cc5f-4xwp2 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:86af77350cfe6fd69280157e4162aa0147873d9431c641ae4ad3e881ff768a73" already present on machine |
| (x3) | openshift-authentication-operator |
kubelet |
authentication-operator-6c968fdfdf-nrrfw |
Started |
Started container authentication-operator |
| (x3) | openshift-cluster-olm-operator |
kubelet |
cluster-olm-operator-56fcb6cc5f-4xwp2 |
Created |
Created container: cluster-olm-operator |
| (x3) | openshift-kube-storage-version-migrator-operator |
kubelet |
kube-storage-version-migrator-operator-b9c5dfc78-dcxkw |
Created |
Created container: kube-storage-version-migrator-operator |
| (x3) | openshift-authentication-operator |
kubelet |
authentication-operator-6c968fdfdf-nrrfw |
Created |
Created container: authentication-operator |
| (x2) | openshift-operator-lifecycle-manager |
kubelet |
package-server-manager-67477646d4-7hndf |
Started |
Started container package-server-manager |
| (x3) | openshift-kube-storage-version-migrator-operator |
kubelet |
kube-storage-version-migrator-operator-b9c5dfc78-dcxkw |
Started |
Started container kube-storage-version-migrator-operator |
kube-system |
kubelet |
bootstrap-kube-controller-manager-master-0 |
Started |
Started container cluster-policy-controller | |
kube-system |
kubelet |
bootstrap-kube-controller-manager-master-0 |
Created |
Created container: cluster-policy-controller | |
| (x3) | openshift-cluster-olm-operator |
kubelet |
cluster-olm-operator-56fcb6cc5f-4xwp2 |
Started |
Started container cluster-olm-operator |
kube-system |
default-scheduler |
kube-scheduler |
LeaderElection |
master-0_41a2b1c1-6ae9-450a-aa38-1e4b9f013507 became leader | |
kube-system |
kube-controller-manager |
kube-controller-manager |
LeaderElection |
master-0_09028e06-d3f3-4e0a-b813-c6c383898ead became leader | |
openshift-operator-lifecycle-manager |
kubelet |
packageserver-675f5c767c-mtdrq |
FailedCreatePodSandBox |
Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_packageserver-675f5c767c-mtdrq_openshift-operator-lifecycle-manager_855d7874-16b1-47d0-82f6-d2b0c89b9a84_0(08bff72cbcb288755b606d271a90590d5de4969e52f24b14dfba197207176aab): error adding pod openshift-operator-lifecycle-manager_packageserver-675f5c767c-mtdrq to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"08bff72cbcb288755b606d271a90590d5de4969e52f24b14dfba197207176aab" Netns:"/var/run/netns/a579b1af-3832-4c4e-8db5-2e91b975a454" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=packageserver-675f5c767c-mtdrq;K8S_POD_INFRA_CONTAINER_ID=08bff72cbcb288755b606d271a90590d5de4969e52f24b14dfba197207176aab;K8S_POD_UID=855d7874-16b1-47d0-82f6-d2b0c89b9a84" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/packageserver-675f5c767c-mtdrq] networking: Multus: [openshift-operator-lifecycle-manager/packageserver-675f5c767c-mtdrq/855d7874-16b1-47d0-82f6-d2b0c89b9a84]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod packageserver-675f5c767c-mtdrq in out of cluster comm: SetNetworkStatus: failed to update the pod packageserver-675f5c767c-mtdrq in out of cluster comm: status update failed for pod /: the server was unable to return a response in the time allotted, but may still be processing the request (get pods packageserver-675f5c767c-mtdrq) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} | |
| | openshift-route-controller-manager | kubelet | route-controller-manager-95cb5f987-46bsk | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-95cb5f987-46bsk_openshift-route-controller-manager_e4d7939a-5961-4608-b910-73e71aa55bf6_0(6d3b3f16703c335a06514238952905813ef880930cb5654115a32b53928d3f6c): error adding pod openshift-route-controller-manager_route-controller-manager-95cb5f987-46bsk to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"6d3b3f16703c335a06514238952905813ef880930cb5654115a32b53928d3f6c" Netns:"/var/run/netns/e47c14cb-63b2-4e70-a12c-95b0aeef97d3" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-95cb5f987-46bsk;K8S_POD_INFRA_CONTAINER_ID=6d3b3f16703c335a06514238952905813ef880930cb5654115a32b53928d3f6c;K8S_POD_UID=e4d7939a-5961-4608-b910-73e71aa55bf6" Path:"" ERRORED: error configuring pod [openshift-route-controller-manager/route-controller-manager-95cb5f987-46bsk] networking: Multus: [openshift-route-controller-manager/route-controller-manager-95cb5f987-46bsk/e4d7939a-5961-4608-b910-73e71aa55bf6]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod route-controller-manager-95cb5f987-46bsk in out of cluster comm: SetNetworkStatus: failed to update the pod route-controller-manager-95cb5f987-46bsk in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-95cb5f987-46bsk?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| | openshift-ingress-operator | cluster-ingress-operator | ingress-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| (x2) | openshift-insights | kubelet | insights-operator-55965856b6-skbmb | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33a20002692769235e95271ab071783c57ff50681088fa1035b86af31e73cf20" already present on machine |
| (x3) | openshift-insights | kubelet | insights-operator-55965856b6-skbmb | Created | Created container: insights-operator |
| (x5) | openshift-kube-controller-manager | multus | installer-2-master-0 | AddedInterface | Add eth0 [10.128.0.68/23] from ovn-kubernetes |
| (x5) | openshift-controller-manager | multus | controller-manager-6686654b8d-rrndk | AddedInterface | Add eth0 [10.128.0.69/23] from ovn-kubernetes |
| (x5) | openshift-marketplace | multus | community-operators-rxhpq | AddedInterface | Add eth0 [10.128.0.65/23] from ovn-kubernetes |
| (x5) | openshift-kube-scheduler | multus | installer-4-master-0 | AddedInterface | Add eth0 [10.128.0.63/23] from ovn-kubernetes |
| (x5) | openshift-marketplace | multus | certified-operators-8qs8v | AddedInterface | Add eth0 [10.128.0.67/23] from ovn-kubernetes |
| (x3) | openshift-insights | kubelet | insights-operator-55965856b6-skbmb | Started | Started container insights-operator |
| | openshift-machine-api | kubelet | machine-api-operator-88d48b57d-9fjtd | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_machine-api-operator-88d48b57d-9fjtd_openshift-machine-api_c50317d3-f7cd-4133-845e-44add57ac378_0(5d2384d86be2f95b3966a53aba3cff3d8ac082133332b4d7e800c63a2574036a): error adding pod openshift-machine-api_machine-api-operator-88d48b57d-9fjtd to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"5d2384d86be2f95b3966a53aba3cff3d8ac082133332b4d7e800c63a2574036a" Netns:"/var/run/netns/5587937f-ddb7-4990-8879-cff0c5f010ae" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=machine-api-operator-88d48b57d-9fjtd;K8S_POD_INFRA_CONTAINER_ID=5d2384d86be2f95b3966a53aba3cff3d8ac082133332b4d7e800c63a2574036a;K8S_POD_UID=c50317d3-f7cd-4133-845e-44add57ac378" Path:"" ERRORED: error configuring pod [openshift-machine-api/machine-api-operator-88d48b57d-9fjtd] networking: Multus: [openshift-machine-api/machine-api-operator-88d48b57d-9fjtd/c50317d3-f7cd-4133-845e-44add57ac378]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod machine-api-operator-88d48b57d-9fjtd in out of cluster comm: SetNetworkStatus: failed to update the pod machine-api-operator-88d48b57d-9fjtd in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/machine-api-operator-88d48b57d-9fjtd?timeout=1m0s": context deadline exceeded ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| (x7) | openshift-machine-api | multus | machine-api-operator-88d48b57d-9fjtd | AddedInterface | Add eth0 [10.128.0.66/23] from ovn-kubernetes |
| | openshift-kube-controller-manager | kubelet | installer-2-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a824e468cf8dd61d347e35b2ee5bc2f815666957647098e21a1bb56ff613e5b9" already present on machine |
| | openshift-machine-config-operator | machine-config-operator | master-0 | FeatureGatesInitialized | FeatureGates updated (same Enabled/Disabled feature-gate lists as the ingress-operator FeatureGatesInitialized event above; message repeated verbatim) |
| | openshift-marketplace | kubelet | certified-operators-8qs8v | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1ca78c423f43f89a0411e40393642f64e4f8df9e5f61c25e31047c4cce170f9" already present on machine |
| | openshift-kube-scheduler | kubelet | installer-4-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f042fa25014f3d37f3ea967d21f361d2a11833ae18f2c750318101b25d2497ce" already present on machine |
| | openshift-machine-api | kubelet | machine-api-operator-88d48b57d-9fjtd | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69ffd8f8dcceedc2d6eb306cea33f8beabc1be1308cd5f4ee8b9a8e3eab9843" already present on machine |
| | openshift-marketplace | kubelet | community-operators-rxhpq | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1ca78c423f43f89a0411e40393642f64e4f8df9e5f61c25e31047c4cce170f9" already present on machine |
| | openshift-insights | openshift-insights-operator | insights-operator | FeatureGatesInitialized | FeatureGates updated (same Enabled/Disabled feature-gate lists as the ingress-operator FeatureGatesInitialized event above; message repeated verbatim) |
| | openshift-machine-api | kubelet | machine-api-operator-88d48b57d-9fjtd | Started | Started container kube-rbac-proxy |
| | openshift-kube-controller-manager | kubelet | installer-2-master-0 | Started | Started container installer |
| (x4) | kube-system | kubelet | bootstrap-kube-controller-manager-master-0 | BackOff | Back-off restarting failed container kube-controller-manager in pod bootstrap-kube-controller-manager-master-0_kube-system(8b47694fcc32464ab24d09c23d6efb57) |
| | openshift-marketplace | kubelet | community-operators-rxhpq | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | community-operators-rxhpq | Created | Created container: extract-utilities |
| | openshift-operator-lifecycle-manager | kubelet | packageserver-675f5c767c-mtdrq | Started | Started container packageserver |
| | openshift-operator-lifecycle-manager | kubelet | packageserver-675f5c767c-mtdrq | Created | Created container: packageserver |
| | openshift-kube-scheduler | kubelet | installer-4-master-0 | Started | Started container installer |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/machine-config-daemon -n openshift-machine-config-operator because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/mcd-prometheus-k8s -n openshift-machine-config-operator because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/machine-config-daemon-events because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/machine-config-daemon because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | SecretCreated | Created Secret/master-user-data-managed -n openshift-machine-api because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | SecretCreated | Created Secret/worker-user-data-managed -n openshift-machine-api because it was missing |
| | openshift-kube-controller-manager | kubelet | installer-2-master-0 | Created | Created container: installer |
| | openshift-marketplace | kubelet | certified-operators-8qs8v | Pulled | Successfully pulled image "registry.redhat.io/redhat/certified-operator-index:v4.18" in 678ms (678ms including waiting). Image size: 1205106509 bytes. |
| | openshift-marketplace | kubelet | certified-operators-8qs8v | Pulling | Pulling image "registry.redhat.io/redhat/certified-operator-index:v4.18" |
| | openshift-marketplace | kubelet | certified-operators-8qs8v | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | certified-operators-8qs8v | Created | Created container: extract-utilities |
| (x5) | openshift-operator-lifecycle-manager | multus | packageserver-675f5c767c-mtdrq | AddedInterface | Add eth0 [10.128.0.64/23] from ovn-kubernetes |
| | openshift-machine-api | kubelet | machine-api-operator-88d48b57d-9fjtd | Created | Created container: kube-rbac-proxy |
| | openshift-kube-scheduler | kubelet | installer-4-master-0 | Created | Created container: installer |
| | openshift-machine-api | kubelet | machine-api-operator-88d48b57d-9fjtd | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c2431a990bcddde98829abda81950247021a2ebbabc964b1516ea046b5f1d4e" |
| | openshift-controller-manager | openshift-controller-manager | openshift-master-controllers | LeaderElection | controller-manager-6686654b8d-rrndk became leader |
| | openshift-operator-lifecycle-manager | kubelet | packageserver-675f5c767c-mtdrq | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1ca78c423f43f89a0411e40393642f64e4f8df9e5f61c25e31047c4cce170f9" already present on machine |
| (x5) | openshift-route-controller-manager | multus | route-controller-manager-95cb5f987-46bsk | AddedInterface | Add eth0 [10.128.0.70/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | community-operators-rxhpq | Pulled | Successfully pulled image "registry.redhat.io/redhat/community-operator-index:v4.18" in 506ms (506ms including waiting). Image size: 1201434959 bytes. |
| | openshift-marketplace | kubelet | community-operators-rxhpq | Started | Started container extract-content |
| | openshift-marketplace | kubelet | community-operators-rxhpq | Pulling | Pulling image "registry.redhat.io/redhat/community-operator-index:v4.18" |
| | openshift-marketplace | kubelet | certified-operators-8qs8v | Created | Created container: extract-content |
| | openshift-marketplace | kubelet | community-operators-rxhpq | Created | Created container: extract-content |
| | openshift-marketplace | kubelet | certified-operators-8qs8v | Started | Started container extract-content |
| | openshift-marketplace | kubelet | community-operators-rxhpq | Started | Started container registry-server |
| | openshift-marketplace | kubelet | certified-operators-8qs8v | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" |
| | openshift-marketplace | kubelet | certified-operators-8qs8v | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" in 372ms (372ms including waiting). Image size: 912722556 bytes. |
| | openshift-operator-lifecycle-manager | kubelet | packageserver-675f5c767c-mtdrq | Unhealthy | Readiness probe failed: Get "https://10.128.0.64:5443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) |
| | openshift-marketplace | kubelet | community-operators-rxhpq | Created | Created container: registry-server |
| | openshift-marketplace | kubelet | community-operators-rxhpq | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" in 383ms (383ms including waiting). Image size: 912722556 bytes. |
| | openshift-operator-lifecycle-manager | kubelet | packageserver-675f5c767c-mtdrq | ProbeError | Readiness probe error: Get "https://10.128.0.64:5443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) body: |
| | openshift-marketplace | kubelet | certified-operators-8qs8v | Created | Created container: registry-server |
| | openshift-marketplace | kubelet | community-operators-rxhpq | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" |
| | openshift-marketplace | kubelet | certified-operators-8qs8v | Started | Started container registry-server |
| | openshift-cluster-storage-operator | cluster-storage-operator | cluster-storage-operator | FeatureGatesInitialized | FeatureGates updated (same Enabled/Disabled feature-gate lists as the ingress-operator FeatureGatesInitialized event above; message repeated verbatim) |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator | csi-snapshot-controller-operator | FeatureGatesInitialized | FeatureGates updated (same Enabled/Disabled feature-gate lists as the ingress-operator FeatureGatesInitialized event above; message repeated verbatim) |
| | openshift-route-controller-manager | route-controller-manager | openshift-route-controllers | LeaderElection | route-controller-manager-95cb5f987-46bsk_2d76dab9-2560-4395-8d5d-0349ab87e4e6 became leader |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator | csi-snapshot-controller-operator-lock | LeaderElection | csi-snapshot-controller-operator-6bc8656fdc-2q7f5_64d59124-72ee-4550-b88a-1dcdcaf7fecb became leader |
| | openshift-cluster-storage-operator | cluster-storage-operator | cluster-storage-operator-lock | LeaderElection | cluster-storage-operator-dcf7fc84b-fncfd_ed78c302-94a0-45d1-8248-b825c39704f7 became leader |
| | openshift-machine-api | cluster-autoscaler-operator-5f49d774cd-894dk_fa2f1dbf-616d-4206-9757-86f041e9b76a | cluster-autoscaler-operator-leader | LeaderElection | cluster-autoscaler-operator-5f49d774cd-894dk_fa2f1dbf-616d-4206-9757-86f041e9b76a became leader |
| | openshift-cluster-node-tuning-operator | performance-profile-controller | cluster-node-tuning-operator | FeatureGatesInitialized | FeatureGates updated (same Enabled/Disabled feature-gate lists as the ingress-operator FeatureGatesInitialized event above; message repeated verbatim) |
| (x2) | openshift-operator-lifecycle-manager | kubelet | packageserver-675f5c767c-mtdrq | ProbeError | Readiness probe error: Get "https://10.128.0.64:5443/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) body: |
| (x2) | openshift-operator-lifecycle-manager | kubelet | packageserver-675f5c767c-mtdrq | Unhealthy | Readiness probe failed: Get "https://10.128.0.64:5443/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) |
| | openshift-marketplace | kubelet | community-operators-rxhpq | Unhealthy | Startup probe failed: timeout: failed to connect service ":50051" within 1s |
| | openshift-machine-api | machineapioperator | machine-api-operator | FeatureGatesInitialized | FeatureGates updated (same Enabled/Disabled feature-gate lists as the ingress-operator FeatureGatesInitialized event above; message repeated verbatim) |
| | openshift-machine-api | kubelet | machine-api-operator-88d48b57d-9fjtd | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c2431a990bcddde98829abda81950247021a2ebbabc964b1516ea046b5f1d4e" in 6.935s (6.935s including waiting). Image size: 856659740 bytes. |
| | openshift-machine-config-operator | machineconfigoperator | machine-config | OperatorDegraded: MachineConfigDaemonFailed | Unable to apply 4.18.29: failed to apply machine config daemon manifests: Internal error occurred: admission plugin "authorization.openshift.io/RestrictSubjectBindings" failed to complete validation in 13s |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleBindingCreateFailed | Failed to create RoleBinding.rbac.authorization.k8s.io/machine-config-daemon-events -n default: Internal error occurred: admission plugin "authorization.openshift.io/RestrictSubjectBindings" failed to complete validation in 13s |
| | openshift-marketplace | kubelet | redhat-marketplace-krbbd | Killing | Stopping container registry-server |
| | openshift-marketplace | kubelet | redhat-marketplace-svhl4 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1ca78c423f43f89a0411e40393642f64e4f8df9e5f61c25e31047c4cce170f9" already present on machine |
| | openshift-marketplace | kubelet | redhat-marketplace-svhl4 | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | redhat-marketplace-svhl4 | Created | Created container: extract-utilities |
| | openshift-marketplace | multus | redhat-marketplace-svhl4 | AddedInterface | Add eth0 [10.128.0.71/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | redhat-marketplace-svhl4 | Pulling | Pulling image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" |
| | openshift-network-node-identity | master-0_8629426e-02d9-4bef-8c14-c2d3682acfa0 | ovnkube-identity | LeaderElection | master-0_8629426e-02d9-4bef-8c14-c2d3682acfa0 became leader |
| | openshift-marketplace | kubelet | redhat-marketplace-svhl4 | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" in 1.089s (1.089s including waiting). Image size: 1129027903 bytes. |
| | openshift-marketplace | kubelet | redhat-marketplace-svhl4 | Started | Started container extract-content |
| | openshift-marketplace | kubelet | redhat-marketplace-svhl4 | Created | Created container: extract-content |
| | openshift-marketplace | kubelet | redhat-marketplace-svhl4 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" |
| | openshift-marketplace | kubelet | redhat-marketplace-svhl4 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" in 1.458s (1.458s including waiting). Image size: 912722556 bytes. |
| | openshift-marketplace | kubelet | redhat-marketplace-svhl4 | Started | Started container registry-server |
| | openshift-marketplace | kubelet | redhat-marketplace-svhl4 | Created | Created container: registry-server |
| (x4) | kube-system | kubelet | bootstrap-kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6d5891cdd7dcf7c9081de8b364b4c96446b7f946f7880fbae291a4592a198264" already present on machine |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Started | Started container kube-controller-manager |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6d5891cdd7dcf7c9081de8b364b4c96446b7f946f7880fbae291a4592a198264" already present on machine |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Created | Created container: kube-controller-manager |
| | openshift-kube-controller-manager | static-pod-installer | installer-2-master-0 | StaticPodInstallerCompleted | Successfully installed revision 2 |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Started | Started container kube-controller-manager-recovery-controller |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Created | Created container: kube-controller-manager-cert-syncer |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Started | Started container kube-controller-manager-cert-syncer |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a824e468cf8dd61d347e35b2ee5bc2f815666957647098e21a1bb56ff613e5b9" already present on machine |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Created | Created container: kube-controller-manager-recovery-controller |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a824e468cf8dd61d347e35b2ee5bc2f815666957647098e21a1bb56ff613e5b9" already present on machine |
| | openshift-cluster-storage-operator | cluster-storage-operator-status-controller-statussyncer_storage | cluster-storage-operator | OperatorStatusChanged | Status for clusteroperator/storage changed: Degraded set to False ("All is well"),Progressing set to False ("All is well"),Available set to True ("DefaultStorageClassControllerAvailable: No default StorageClass for this platform"),Upgradeable set to True ("All is well"),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"" "namespaces" "" "openshift-cluster-storage-operator"} {"" "namespaces" "" "openshift-cluster-csi-drivers"} {"operator.openshift.io" "storages" "" "cluster"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "cluster-storage-operator-role"}],status.versions changed from [] to [{"operator" "4.18.29"}] |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "All is well" to "WebhookRemovalControllerDegraded: csisnapshotcontrollers.operator.openshift.io \"cluster\" not found" |
| | kube-system | kubelet | bootstrap-kube-scheduler-master-0 | Killing | Stopping container kube-scheduler |
| (x2) | openshift-machine-config-operator | machineconfigoperator | machine-config | OperatorVersionChanged | clusteroperator/machine-config started a version change from [] to [{operator 4.18.29} {operator-image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b472823604757237c2d16bd6f6221f4cf562aa3b05942c7f602e1e8b2e55a7c6}] |
| (x3) | openshift-cluster-storage-operator | cluster-storage-operator-status-controller-statussyncer_storage | cluster-storage-operator | OperatorVersionChanged | clusteroperator/storage version "operator" changed from "" to "4.18.29" |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6d5891cdd7dcf7c9081de8b364b4c96446b7f946f7880fbae291a4592a198264" already present on machine |
| | openshift-kube-scheduler | static-pod-installer | installer-4-master-0 | StaticPodInstallerCompleted | Successfully installed revision 4 |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Started | Started container wait-for-host-port |
| | default | machineapioperator | machine-api | Status upgrade | Progressing towards operator: 4.18.29 |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/machine-config-daemon-events -n openshift-machine-config-operator because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/machine-config-daemon-events -n default because it was missing |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Created | Created container: wait-for-host-port |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/machine-config-daemon because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ServiceAccountCreated | Created ServiceAccount/machine-config-daemon -n openshift-machine-config-operator because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ValidatingAdmissionPolicyCreated | Created ValidatingAdmissionPolicy.admissionregistration.k8s.io/mcn-guards because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/mcd-prometheus-k8s -n openshift-machine-config-operator because it was missing |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Created | Created container: kube-scheduler-recovery-controller |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f042fa25014f3d37f3ea967d21f361d2a11833ae18f2c750318101b25d2497ce" already present on machine |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Started | Started container kube-scheduler-recovery-controller |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/machine-config-daemon -n openshift-machine-config-operator because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ValidatingAdmissionPolicyBindingCreated | Created ValidatingAdmissionPolicyBinding.admissionregistration.k8s.io/mcn-guards-binding because it was missing |
| | openshift-kube-scheduler | default-scheduler | kube-scheduler | LeaderElection | master-0_ec219b58-1204-4c72-bc03-b4282ea9c15c became leader |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "WebhookRemovalControllerDegraded: csisnapshotcontrollers.operator.openshift.io \"cluster\" not found" to "All is well" |
| | openshift-machine-api | cluster-baremetal-operator-78f758c7b9-zgkh5_5d2712d2-68de-475a-898b-7019244a4d5d | cluster-baremetal-operator | LeaderElection | cluster-baremetal-operator-78f758c7b9-zgkh5_5d2712d2-68de-475a-898b-7019244a4d5d became leader |
| | openshift-machine-api | cluster-autoscaler-operator-5f49d774cd-894dk_a8f1f627-f8fd-488c-ace9-548ee911d8fb | cluster-autoscaler-operator-leader | LeaderElection | cluster-autoscaler-operator-5f49d774cd-894dk_a8f1f627-f8fd-488c-ace9-548ee911d8fb became leader |
| | openshift-cluster-storage-operator | snapshot-controller-leader/csi-snapshot-controller-6b958b6f94-w74zr | snapshot-controller-leader | LeaderElection | csi-snapshot-controller-6b958b6f94-w74zr became leader |
| (x5) | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-6b958b6f94-w74zr | Started | Started container snapshot-controller |
| (x2) | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d64c13fe7663a0b4ae61d103b1b7598adcf317a01826f296bcb66b1a2de83c96" already present on machine |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Unhealthy | Startup probe failed: Get "https://localhost:10357/healthz": read tcp 127.0.0.1:57002->127.0.0.1:10357: read: connection reset by peer |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Killing | Container cluster-policy-controller failed startup probe, will be restarted |
| (x2) | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Started | Started container cluster-policy-controller |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | ProbeError | Startup probe error: Get "https://localhost:10357/healthz": read tcp 127.0.0.1:57002->127.0.0.1:10357: read: connection reset by peer body: |
| (x2) | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Created | Created container: cluster-policy-controller |
| (x3) | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Unhealthy | Startup probe failed: Get "https://localhost:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) |
| (x4) | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | ProbeError | Startup probe error: Get "https://localhost:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) body: |
| | openshift-operator-controller | operator-controller-controller-manager-7cbd59c7f8-qcz9t_42f1540d-7e08-48f7-a837-7e1b3ba92857 | 9c4404e7.operatorframework.io | LeaderElection | operator-controller-controller-manager-7cbd59c7f8-qcz9t_42f1540d-7e08-48f7-a837-7e1b3ba92857 became leader |
| | openshift-machine-api | control-plane-machine-set-operator-7df95c79b5-7w5lm_cf49f13c-d4a7-4d5e-bf66-5cf43e01b901 | control-plane-machine-set-leader | LeaderElection | control-plane-machine-set-operator-7df95c79b5-7w5lm_cf49f13c-d4a7-4d5e-bf66-5cf43e01b901 became leader |
| | openshift-catalogd | catalogd-controller-manager-7cc89f4c4c-fd9pv_cad58daa-8329-4e92-b24f-dce5557e509e | catalogd-operator-lock | LeaderElection | catalogd-controller-manager-7cc89f4c4c-fd9pv_cad58daa-8329-4e92-b24f-dce5557e509e became leader |
| (x4) | openshift-ingress-operator | kubelet | ingress-operator-8649c48786-cx2b2 | Started | Started container ingress-operator |
| (x3) | openshift-ingress-operator | kubelet | ingress-operator-8649c48786-cx2b2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:831f30660844091d6154e2674d3a9da6f34271bf8a2c40b56f7416066318742b" already present on machine |
| (x4) | openshift-ingress-operator | kubelet | ingress-operator-8649c48786-cx2b2 | Created | Created container: ingress-operator |
| | openshift-ingress-operator | cluster-ingress-operator | ingress-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-operator-lifecycle-manager | package-server-manager-67477646d4-7hndf_5a767bc3-9a75-405d-8427-31679e2dfc65 | packageserver-controller-lock | LeaderElection | package-server-manager-67477646d4-7hndf_5a767bc3-9a75-405d-8427-31679e2dfc65 became leader |
| | openshift-apiserver-operator | openshift-apiserver-operator | openshift-apiserver-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-apiserver-operator | openshift-apiserver-operator | openshift-apiserver-operator-lock | LeaderElection | openshift-apiserver-operator-7bf7f6b755-sh6qf_ecbc45c3-1806-4514-afbe-3c6575ce7fbb became leader |
| | openshift-apiserver-operator | openshift-apiserver-operator-audit-policy-controller-auditpolicycontroller | openshift-apiserver-operator | FastControllerResync | Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | FastControllerResync | Controller "openshift-apiserver-APIService" resync interval is set to 10s which might lead to client request throttling |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | openshift-etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "ClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries\nClusterMemberRemovalControllerDegraded: IsBootstrapComplete failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values" to "ClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator | openshift-cluster-etcd-operator-lock | LeaderElection | etcd-operator-5bf4d88c6f-2bpmr_4536fae1-6245-4260-b98d-d3f28be1193f became leader |
| | openshift-etcd-operator | openshift-cluster-etcd-operator | openshift-etcd-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| (x2) | openshift-etcd-operator | openshift-cluster-etcd-operator-member-observer-controller-etcdmemberscontroller | openshift-etcd-operator | ReportEtcdMembersErrorUpdatingStatus | etcds.operator.openshift.io "cluster" not found |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | openshift-etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded changed from False to True ("ClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries\nClusterMemberRemovalControllerDegraded: IsBootstrapComplete failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nEtcdMembersControllerDegraded: giving up getting a cached client after 3 tries") |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | openshift-etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "ClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries\nClusterMemberRemovalControllerDegraded: IsBootstrapComplete failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nEtcdMembersControllerDegraded: giving up getting a cached client after 3 tries" to "ClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries\nClusterMemberRemovalControllerDegraded: IsBootstrapComplete failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | openshift-etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "ClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries\nClusterMemberRemovalControllerDegraded: IsBootstrapComplete failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced" to "ClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries\nClusterMemberRemovalControllerDegraded: IsBootstrapComplete failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | openshift-etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "ClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values" to "EtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-member-observer-controller-etcdmemberscontroller | openshift-etcd-operator | ReportEtcdMembersErrorUpdatingStatus | Operation cannot be fulfilled on etcds.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | openshift-etcd-operator | StartingNewRevision | new revision 2 triggered by "required configmap/etcd-endpoints has changed" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller | openshift-etcd-operator | ConfigMapUpdated | Updated ConfigMap/etcd-endpoints -n openshift-etcd: cause by changes in data.91eb892c5ee87610,data.MTkyLjE2OC4zMi4xMA |
| | openshift-service-ca | service-ca-controller | service-ca-controller-lock | LeaderElection | service-ca-77c99c46b8-6cntk_795cd857-b9e0-4456-adb7-c11c0eba9e4e became leader |
| | openshift-network-operator | network-operator | network-operator-lock | LeaderElection | master-0_e6d2d24d-98e3-4f24-8184-9dee2f86b8b6 became leader |
| | openshift-network-operator | cluster-network-operator | network-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-installer-controller | openshift-etcd-operator | NodeCurrentRevisionChanged | Updated node "master-0" from revision 0 to 1 because static pod is ready |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator | kube-controller-manager-operator-lock | LeaderElection | kube-controller-manager-operator-848f645654-7hmhg_69ca551c-5a70-48b5-b047-ad00c925bdb9 became leader |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | openshift-etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-pod-2 -n openshift-etcd because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator | openshift-kube-controller-manager-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | openshift-kube-controller-manager-operator | StartingNewRevision | new revision 3 triggered by "required configmap/kube-controller-manager-pod has changed,required configmap/serviceaccount-ca has changed" |
| | openshift-cluster-node-tuning-operator | cluster-node-tuning-operator-85cff47f46-4gv5j_56a8d629-2733-4428-be44-97e06cc80f71 | node-tuning-operator-lock | LeaderElection | cluster-node-tuning-operator-85cff47f46-4gv5j_56a8d629-2733-4428-be44-97e06cc80f71 became leader |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | openshift-etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-endpoints-2 -n openshift-etcd because it was missing |
| (x2) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | openshift-kube-controller-manager-operator | OperatorVersionChanged | clusteroperator/kube-controller-manager version "operator" changed from "" to "4.18.29" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | openshift-etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-all-bundles-2 -n openshift-etcd because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | openshift-kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "StaticPodsDegraded: pod/kube-controller-manager-master-0 container \"cluster-policy-controller\" is waiting: CrashLoopBackOff: back-off 1m20s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(e6b437c60bb18680f4492b00b294e872)\nNodeControllerDegraded: All master nodes are ready" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | openshift-kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: status.versions changed from [{"raw-internal" "4.18.29"}] to [{"raw-internal" "4.18.29"} {"kube-controller-manager" "1.31.13"} {"operator" "4.18.29"}] |
| (x2) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | openshift-kube-controller-manager-operator | OperatorVersionChanged | clusteroperator/kube-controller-manager version "kube-controller-manager" changed from "" to "1.31.13" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | openshift-etcd-operator | SecretCreated | Created Secret/etcd-all-certs-2 -n openshift-etcd because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | openshift-kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/recycler-config-3 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | openshift-kube-controller-manager-operator | SecretCreated | Created Secret/service-account-private-key-3 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | openshift-kube-controller-manager-operator | SecretCreated | Created Secret/serving-cert-3 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | openshift-kube-controller-manager-operator | SecretCreated | Created Secret/localhost-recovery-client-token-3 -n openshift-kube-controller-manager because it was missing |
| | openshift-cluster-version | openshift-cluster-version | version | LeaderElection | master-0_5b5f4e61-8577-4c39-b810-b785e09ab344 became leader |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator | openshift-kube-scheduler-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-lock | LeaderElection | openshift-kube-scheduler-operator-5f85974995-g4rwv_af2c1a91-1f47-4178-86e2-e82e6ecddf1f became leader |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: status.versions changed from [{"raw-internal" "4.18.29"}] to [{"raw-internal" "4.18.29"} {"kube-scheduler" "1.31.13"} {"operator" "4.18.29"}] |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorVersionChanged | clusteroperator/kube-scheduler version "kube-scheduler" changed from "" to "1.31.13" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorVersionChanged | clusteroperator/kube-scheduler version "operator" changed from "" to "4.18.29" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | openshift-kube-controller-manager-operator | RevisionTriggered | new revision 3 triggered by "required configmap/kube-controller-manager-pod has changed,required configmap/serviceaccount-ca has changed" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-installer-controller | openshift-etcd-operator | NodeTargetRevisionChanged | Updating node "master-0" from revision 1 to 2 because node master-0 with revision 1 is the oldest |
| | openshift-cluster-version | openshift-cluster-version | version | RetrievePayload | Retrieving and verifying payload version="4.18.29" image="quay.io/openshift-release-dev/ocp-release@sha256:8c885ea0b3c5124989f0a9b93eba98eb9fca6bbd0262772d85d90bf713a4d572" |
| | openshift-cluster-version | openshift-cluster-version | version | LoadPayload | Loading payload version="4.18.29" image="quay.io/openshift-release-dev/ocp-release@sha256:8c885ea0b3c5124989f0a9b93eba98eb9fca6bbd0262772d85d90bf713a4d572" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | NodeCurrentRevisionChanged | Updated node "master-0" from revision 0 to 4 because static pod is ready |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 4"),Available changed from False to True ("StaticPodsAvailable: 1 nodes are active; 1 node is at revision 4") |
| | openshift-cluster-version | openshift-cluster-version | version | PayloadLoaded | Payload loaded version="4.18.29" image="quay.io/openshift-release-dev/ocp-release@sha256:8c885ea0b3c5124989f0a9b93eba98eb9fca6bbd0262772d85d90bf713a4d572" architecture="amd64" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-installer-controller | openshift-etcd-operator | PodCreated | Created Pod/installer-2-master-0 -n openshift-etcd because it was missing |
| | openshift-etcd | kubelet | installer-2-master-0 | Created | Created container: installer |
| | openshift-etcd | multus | installer-2-master-0 | AddedInterface | Add eth0 [10.128.0.72/23] from ovn-kubernetes |
| | openshift-etcd | kubelet | installer-2-master-0 | Started | Started container installer |
| | openshift-etcd | kubelet | installer-2-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f4d4282cb53325e737ad68abbfcb70687ae04fb50353f4f0ba0ba5703b15009a" already present on machine |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | openshift-kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 2" to "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 3",Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 2" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 3" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | openshift-kube-controller-manager-operator | PodCreated | Created Pod/installer-3-master-0 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager | kubelet | installer-3-master-0 | Started | Started container installer |
| | openshift-kube-controller-manager | kubelet | installer-3-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a824e468cf8dd61d347e35b2ee5bc2f815666957647098e21a1bb56ff613e5b9" already present on machine |
| | openshift-kube-controller-manager | kubelet | installer-3-master-0 | Created | Created container: installer |
| | openshift-kube-controller-manager | multus | installer-3-master-0 | AddedInterface | Add eth0 [10.128.0.73/23] from ovn-kubernetes |
| | openshift-service-ca-operator | service-ca-operator | service-ca-operator-lock | LeaderElection | service-ca-operator-77758bc754-8smqn_9ec5d040-bd77-4f4b-8d8b-25046a5ae285 became leader |
| | openshift-image-registry | image-registry-operator | openshift-master-controllers | LeaderElection | cluster-image-registry-operator-6fb9f88b7-tgvfl_fb72665d-b1de-4acb-9306-ef17501cb970 became leader |
| | openshift-image-registry | image-registry-operator | cluster-image-registry-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 4, desired generation is 5.\nProgressing: deployment/route-controller-manager: observed generation is 4, desired generation is 5." to "Progressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1",Available changed from True to False ("Available: no pods available on any node.") |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator-lock | LeaderElection | openshift-controller-manager-operator-6c8676f99d-7z948_8b885916-2eb8-4873-93e8-1f63051ec480 became leader |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-ingress-operator | cluster-ingress-operator | ingress-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| (x13) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-satokensignercontroller | openshift-kube-controller-manager-operator | SATokenSignerControllerStuck | unexpected addresses: 192.168.32.10 |
| | openshift-etcd | kubelet | etcd-master-0 | Killing | Stopping container etcdctl |
| (x3) | openshift-marketplace | kubelet | marketplace-operator-f797b99b6-hjjrk | Started | Started container marketplace-operator |
| (x3) | openshift-marketplace | kubelet | marketplace-operator-f797b99b6-hjjrk | Created | Created container: marketplace-operator |
| (x2) | openshift-marketplace | kubelet | marketplace-operator-f797b99b6-hjjrk | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7664a2d4cb10e82ed32abbf95799f43fc3d10135d7dd94799730de504a89680a" already present on machine |
| (x3) | openshift-operator-controller | kubelet | operator-controller-controller-manager-7cbd59c7f8-qcz9t | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1f952cec1e5332b84bdffa249cd426f39087058d6544ddcec650a414c15a9b68" already present on machine |
| (x3) | openshift-operator-controller | kubelet | operator-controller-controller-manager-7cbd59c7f8-qcz9t | Created | Created container: manager |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:188637a52cafee61ec461e92fb0c605e28be325b9ac1f2ac8a37d68e97654718" already present on machine |
| (x3) | openshift-operator-controller | kubelet | operator-controller-controller-manager-7cbd59c7f8-qcz9t | Started | Started container manager |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcd-ensure-env-vars |
| (x2) | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-74f484689c-jmfn2 | Created | Created container: config-sync-controllers |
| (x2) | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-74f484689c-jmfn2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd38b8be3af889b0f97e2df41517c89a11260901432a9a1ee943195bb3a22737" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcd-ensure-env-vars |
| (x2) | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-74f484689c-jmfn2 | Started | Started container config-sync-controllers |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-74f484689c-jmfn2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd38b8be3af889b0f97e2df41517c89a11260901432a9a1ee943195bb3a22737" already present on machine |
| (x2) | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-74f484689c-jmfn2 | Started | Started container cluster-cloud-controller-manager |
| (x2) | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-74f484689c-jmfn2 | Created | Created container: cluster-cloud-controller-manager |
| (x3) | openshift-catalogd | kubelet | catalogd-controller-manager-7cc89f4c4c-fd9pv | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3f0aa9cd04713acc5c6fea721bd849e1500da8ae945e0b32000887f34d786e0b" already present on machine |
| (x3) | openshift-catalogd | kubelet | catalogd-controller-manager-7cc89f4c4c-fd9pv | Started | Started container manager |
| (x3) | openshift-catalogd | kubelet | catalogd-controller-manager-7cc89f4c4c-fd9pv | Created | Created container: manager |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:188637a52cafee61ec461e92fb0c605e28be325b9ac1f2ac8a37d68e97654718" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcd-resources-copy |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcd-resources-copy |
| (x2) | openshift-machine-api | kubelet | control-plane-machine-set-operator-7df95c79b5-7w5lm | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd3e9f8f00a59bda7483ec7dc8a0ed602f9ca30e3d72b22072dbdf2819da3f61" already present on machine |
| (x3) | openshift-machine-api | kubelet | control-plane-machine-set-operator-7df95c79b5-7w5lm | Created | Created container: control-plane-machine-set-operator |
| (x3) | openshift-machine-api | kubelet | control-plane-machine-set-operator-7df95c79b5-7w5lm | Started | Started container control-plane-machine-set-operator |
| (x2) | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6d5891cdd7dcf7c9081de8b364b4c96446b7f946f7880fbae291a4592a198264" already present on machine |
| (x2) | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Created | Created container: kube-scheduler |
| (x2) | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Started | Started container kube-scheduler |
| (x2) | openshift-machine-config-operator | kubelet | machine-config-operator-dc5d7666f-p2cmn | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b472823604757237c2d16bd6f6221f4cf562aa3b05942c7f602e1e8b2e55a7c6" already present on machine |
| (x2) | openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-5df5548d54-jjfhq | Started | Started container ovnkube-cluster-manager |
| (x2) | openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-5df5548d54-jjfhq | Created | Created container: ovnkube-cluster-manager |
| (x2) | openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-5df5548d54-jjfhq | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3e65409fc2b27ad0aaeb500a39e264663d2980821f099b830b551785ce4ce8b" already present on machine |
| (x2) | openshift-machine-config-operator | kubelet | machine-config-operator-dc5d7666f-p2cmn | Created | Created container: machine-config-operator |
| (x2) | openshift-machine-config-operator | kubelet | machine-config-operator-dc5d7666f-p2cmn | Started | Started container machine-config-operator |
| (x2) | openshift-operator-lifecycle-manager | kubelet | package-server-manager-67477646d4-7hndf | BackOff | Back-off restarting failed container package-server-manager in pod package-server-manager-67477646d4-7hndf_openshift-operator-lifecycle-manager(72faf6d6-e8ca-43d1-b93e-67c11f8d3b46) |
| (x2) | openshift-controller-manager | kubelet | controller-manager-6686654b8d-rrndk | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eddedae7578d79b5a3f748000ae5c00b9f14a04710f9f9ec7b52fc569be5dfb8" already present on machine |
| (x2) | openshift-controller-manager | kubelet | controller-manager-6686654b8d-rrndk | Created | Created container: controller-manager |
| (x2) | openshift-controller-manager | kubelet | controller-manager-6686654b8d-rrndk | Started | Started container controller-manager |
| (x2) | openshift-cluster-machine-approver | kubelet | machine-approver-f797d8546-qvgbq | Created | Created container: machine-approver-controller |
| (x2) | openshift-cluster-machine-approver | kubelet | machine-approver-f797d8546-qvgbq | Started | Started container machine-approver-controller |
| | openshift-cluster-machine-approver | kubelet | machine-approver-f797d8546-qvgbq | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8cc27777e72233024fe84ee1faa168aec715a0b24912a3ce70715ddccba328df" already present on machine |
| (x2) | openshift-operator-lifecycle-manager | kubelet | package-server-manager-67477646d4-7hndf | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1ca78c423f43f89a0411e40393642f64e4f8df9e5f61c25e31047c4cce170f9" already present on machine |
| (x3) | openshift-operator-lifecycle-manager | kubelet | package-server-manager-67477646d4-7hndf | Created | Created container: package-server-manager |
| (x10) | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-6b958b6f94-w74zr | BackOff | Back-off restarting failed container snapshot-controller in pod csi-snapshot-controller-6b958b6f94-w74zr_openshift-cluster-storage-operator(38fc8086-a00c-4a2a-8a0e-c57e9d9d0103) |
| (x4) | openshift-machine-api | kubelet | cluster-baremetal-operator-78f758c7b9-zgkh5 | BackOff | Back-off restarting failed container cluster-baremetal-operator in pod cluster-baremetal-operator-78f758c7b9-zgkh5_openshift-machine-api(7e3160a9-11d1-4845-ba30-1a49ae7339a9) |
| (x6) | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-6b958b6f94-w74zr | Created | Created container: snapshot-controller |
| (x5) | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-6b958b6f94-w74zr | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3ce2cbf1032ad0f24f204db73687002fcf302e86ebde3945801c74351b64576" already present on machine |
| (x4) | openshift-machine-api | kubelet | cluster-baremetal-operator-78f758c7b9-zgkh5 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a92c310ce30dcb3de85d6aac868e0d80919670fa29ef83d55edd96b0cae35563" already present on machine |
| (x5) | openshift-machine-api | kubelet | cluster-baremetal-operator-78f758c7b9-zgkh5 | Started | Started container cluster-baremetal-operator |
| (x5) | openshift-machine-api | kubelet | cluster-baremetal-operator-78f758c7b9-zgkh5 | Created | Created container: cluster-baremetal-operator |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master-0 |
ProbeError |
Readiness probe error: Get "https://192.168.32.10:10259/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) body: | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master-0 |
Unhealthy |
Readiness probe failed: Get "https://192.168.32.10:10259/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master-0 |
Unhealthy |
Liveness probe failed: Get "https://192.168.32.10:10259/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master-0 |
ProbeError |
Liveness probe error: Get "https://192.168.32.10:10259/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) body: | |
openshift-etcd |
kubelet |
etcd-master-0 |
Created |
Created container: etcdctl | |
openshift-etcd |
kubelet |
etcd-master-0 |
Started |
Started container etcd | |
openshift-etcd |
kubelet |
etcd-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:188637a52cafee61ec461e92fb0c605e28be325b9ac1f2ac8a37d68e97654718" already present on machine | |
openshift-etcd |
kubelet |
etcd-master-0 |
Started |
Started container etcdctl | |
openshift-etcd |
kubelet |
etcd-master-0 |
Created |
Created container: etcd | |
openshift-etcd |
kubelet |
etcd-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:188637a52cafee61ec461e92fb0c605e28be325b9ac1f2ac8a37d68e97654718" already present on machine | |
openshift-etcd |
kubelet |
etcd-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:188637a52cafee61ec461e92fb0c605e28be325b9ac1f2ac8a37d68e97654718" already present on machine | |
openshift-etcd |
kubelet |
etcd-master-0 |
Started |
Started container etcd-readyz | |
openshift-etcd |
kubelet |
etcd-master-0 |
Created |
Created container: etcd-rev | |
openshift-etcd |
kubelet |
etcd-master-0 |
Started |
Started container etcd-rev | |
openshift-etcd |
kubelet |
etcd-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f4d4282cb53325e737ad68abbfcb70687ae04fb50353f4f0ba0ba5703b15009a" already present on machine | |
openshift-etcd |
kubelet |
etcd-master-0 |
Created |
Created container: etcd-readyz | |
openshift-etcd |
kubelet |
etcd-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f4d4282cb53325e737ad68abbfcb70687ae04fb50353f4f0ba0ba5703b15009a" already present on machine | |
openshift-etcd |
kubelet |
etcd-master-0 |
Started |
Started container etcd-metrics | |
openshift-etcd |
kubelet |
etcd-master-0 |
Created |
Created container: etcd-metrics | |
| (x3) | openshift-etcd-operator |
kubelet |
etcd-operator-5bf4d88c6f-2bpmr |
Started |
Started container etcd-operator |
| (x3) | openshift-etcd-operator |
kubelet |
etcd-operator-5bf4d88c6f-2bpmr |
Created |
Created container: etcd-operator |
| (x3) | openshift-etcd-operator |
kubelet |
etcd-operator-5bf4d88c6f-2bpmr |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f4d4282cb53325e737ad68abbfcb70687ae04fb50353f4f0ba0ba5703b15009a" already present on machine |
openshift-ingress-operator |
cluster-ingress-operator |
ingress-operator |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", 
"StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} | |
| (x3) | openshift-service-ca | kubelet | service-ca-77c99c46b8-6cntk | Started | Started container service-ca-controller |
| (x3) | openshift-service-ca | kubelet | service-ca-77c99c46b8-6cntk | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8139ed65c0a0a4b0f253b715c11cc52be027efe8a4774da9ccce35c78ef439da" already present on machine |
| (x3) | openshift-service-ca | kubelet | service-ca-77c99c46b8-6cntk | Created | Created container: service-ca-controller |
| (x4) | openshift-network-operator | kubelet | network-operator-79767b7ff9-5bgzx | Started | Started container network-operator |
| (x4) | openshift-network-operator | kubelet | network-operator-79767b7ff9-5bgzx | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9724d2036305cbd729e1f484c5bad89971de977fff8a6723fef1873858dd1123" already present on machine |
| (x4) | openshift-network-operator | kubelet | network-operator-79767b7ff9-5bgzx | Created | Created container: network-operator |
| (x3) | openshift-kube-controller-manager-operator | kubelet | kube-controller-manager-operator-848f645654-7hmhg | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a824e468cf8dd61d347e35b2ee5bc2f815666957647098e21a1bb56ff613e5b9" already present on machine |
| (x3) | openshift-kube-controller-manager-operator | kubelet | kube-controller-manager-operator-848f645654-7hmhg | Started | Started container kube-controller-manager-operator |
| (x3) | openshift-kube-controller-manager-operator | kubelet | kube-controller-manager-operator-848f645654-7hmhg | Created | Created container: kube-controller-manager-operator |
| (x35) | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | BackOff | Back-off restarting failed container cluster-policy-controller in pod kube-controller-manager-master-0_openshift-kube-controller-manager(e6b437c60bb18680f4492b00b294e872) |
| | openshift-controller-manager | openshift-controller-manager | openshift-master-controllers | LeaderElection | controller-manager-6686654b8d-rrndk became leader |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-lock | LeaderElection | kube-storage-version-migrator-operator-b9c5dfc78-dcxkw_ada5cebb-e859-4564-b956-32d6373530fe became leader |
| | openshift-cluster-version | openshift-cluster-version | version | LeaderElection | master-0_5b5f4e61-8577-4c39-b810-b785e09ab344 stopped leading |
| | openshift-ovn-kubernetes | ovnk-controlplane | ovn-kubernetes-master | LeaderElection | ovnkube-control-plane-5df5548d54-jjfhq became leader |
| | openshift-cluster-olm-operator | cluster-olm-operator | cluster-olm-operator-lock | LeaderElection | cluster-olm-operator-56fcb6cc5f-4xwp2_a440be7a-c6d0-421f-b829-47961fb20652 became leader |
| (x4) | openshift-service-ca-operator | kubelet | service-ca-operator-77758bc754-8smqn | Created | Created container: service-ca-operator |
| (x4) | openshift-service-ca-operator | kubelet | service-ca-operator-77758bc754-8smqn | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8139ed65c0a0a4b0f253b715c11cc52be027efe8a4774da9ccce35c78ef439da" already present on machine |
| (x2) | openshift-route-controller-manager | kubelet | route-controller-manager-95cb5f987-46bsk | Started | Started container route-controller-manager |
| (x4) | openshift-service-ca-operator | kubelet | service-ca-operator-77758bc754-8smqn | Started | Started container service-ca-operator |
| (x4) | openshift-cluster-storage-operator | kubelet | cluster-storage-operator-dcf7fc84b-fncfd | Started | Started container cluster-storage-operator |
| (x3) | openshift-cluster-storage-operator | kubelet | cluster-storage-operator-dcf7fc84b-fncfd | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:97d26892192b552c16527bf2771e1b86528ab581a02dd9279cdf71c194830e3e" already present on machine |
| (x3) | openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-85cff47f46-4gv5j | Started | Started container cluster-node-tuning-operator |
| (x2) | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-operator-6bc8656fdc-2q7f5 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10e57ca7611f79710f05777dc6a8f31c7e04eb09da4d8d793a5acfbf0e4692d7" already present on machine |
| (x4) | openshift-controller-manager-operator | kubelet | openshift-controller-manager-operator-6c8676f99d-7z948 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8eabac819f289e29d75c7ab172d8124554849a47f0b00770928c3eb19a5a31c4" already present on machine |
| (x4) | openshift-controller-manager-operator | kubelet | openshift-controller-manager-operator-6c8676f99d-7z948 | Created | Created container: openshift-controller-manager-operator |
| (x3) | openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-85cff47f46-4gv5j | Created | Created container: cluster-node-tuning-operator |
| (x4) | openshift-kube-scheduler-operator | kubelet | openshift-kube-scheduler-operator-5f85974995-g4rwv | Started | Started container kube-scheduler-operator-container |
| (x4) | openshift-kube-scheduler-operator | kubelet | openshift-kube-scheduler-operator-5f85974995-g4rwv | Created | Created container: kube-scheduler-operator-container |
| (x4) | openshift-cluster-storage-operator | kubelet | cluster-storage-operator-dcf7fc84b-fncfd | Created | Created container: cluster-storage-operator |
| (x2) | openshift-route-controller-manager | kubelet | route-controller-manager-95cb5f987-46bsk | Created | Created container: route-controller-manager |
| (x4) | openshift-kube-scheduler-operator | kubelet | openshift-kube-scheduler-operator-5f85974995-g4rwv | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f042fa25014f3d37f3ea967d21f361d2a11833ae18f2c750318101b25d2497ce" already present on machine |
| (x4) | openshift-controller-manager-operator | kubelet | openshift-controller-manager-operator-6c8676f99d-7z948 | Started | Started container openshift-controller-manager-operator |
| (x2) | openshift-machine-api | kubelet | machine-api-operator-88d48b57d-9fjtd | Created | Created container: machine-api-operator |
| (x2) | openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-85cff47f46-4gv5j | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5451aa441e5b8d8689c032405d410c8049a849ef2edf77e5b6a5ce2838c6569b" already present on machine |
| (x2) | openshift-machine-api | kubelet | machine-api-operator-88d48b57d-9fjtd | Started | Started container machine-api-operator |
| | openshift-machine-api | kubelet | machine-api-operator-88d48b57d-9fjtd | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c2431a990bcddde98829abda81950247021a2ebbabc964b1516ea046b5f1d4e" already present on machine |
| (x2) | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-operator-6bc8656fdc-2q7f5 | Started | Started container csi-snapshot-controller-operator |
| (x3) | openshift-image-registry | kubelet | cluster-image-registry-operator-6fb9f88b7-tgvfl | Created | Created container: cluster-image-registry-operator |
| (x2) | openshift-image-registry | kubelet | cluster-image-registry-operator-6fb9f88b7-tgvfl | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aa24edce3d740f84c40018e94cdbf2bc7375268d13d57c2d664e43a46ccea3fc" already present on machine |
| (x3) | openshift-image-registry | kubelet | cluster-image-registry-operator-6fb9f88b7-tgvfl | Started | Started container cluster-image-registry-operator |
| (x2) | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-operator-6bc8656fdc-2q7f5 | Created | Created container: csi-snapshot-controller-operator |
| (x3) | openshift-machine-api | kubelet | cluster-autoscaler-operator-5f49d774cd-894dk | BackOff | Back-off restarting failed container cluster-autoscaler-operator in pod cluster-autoscaler-operator-5f49d774cd-894dk_openshift-machine-api(e7fc7c16-5bca-49e5-aff0-7a8f80c6b639) |
| | openshift-route-controller-manager | kubelet | route-controller-manager-95cb5f987-46bsk | Unhealthy | Readiness probe failed: Get "https://10.128.0.70:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) |
| | openshift-route-controller-manager | kubelet | route-controller-manager-95cb5f987-46bsk | ProbeError | Readiness probe error: Get "https://10.128.0.70:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) body: |
| (x2) | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Created | Created container: kube-scheduler-cert-syncer |
| (x2) | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f042fa25014f3d37f3ea967d21f361d2a11833ae18f2c750318101b25d2497ce" already present on machine |
| (x2) | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Started | Started container kube-scheduler-cert-syncer |
| (x4) | openshift-machine-api | kubelet | cluster-autoscaler-operator-5f49d774cd-894dk | Created | Created container: cluster-autoscaler-operator |
| (x4) | openshift-machine-api | kubelet | cluster-autoscaler-operator-5f49d774cd-894dk | Started | Started container cluster-autoscaler-operator |
| (x3) | openshift-machine-api | kubelet | cluster-autoscaler-operator-5f49d774cd-894dk | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:72bbe2c638872937108f647950ab8ad35c0428ca8ecc6a39a8314aace7d95078" already present on machine |
| | openshift-cluster-node-tuning-operator | cluster-node-tuning-operator-85cff47f46-4gv5j_2965421b-a54c-4350-9046-3660f49583a6 | node-tuning-operator-lock | LeaderElection | cluster-node-tuning-operator-85cff47f46-4gv5j_2965421b-a54c-4350-9046-3660f49583a6 became leader |
| | openshift-cluster-node-tuning-operator | performance-profile-controller | cluster-node-tuning-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-machine-api | machineapioperator | machine-api-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-image-registry | image-registry-operator | openshift-master-controllers | LeaderElection | cluster-image-registry-operator-6fb9f88b7-tgvfl_0434264d-558a-4723-99f2-056d66af3bf9 became leader |
| | openshift-image-registry | image-registry-operator | cluster-image-registry-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerStaticResourcesDegraded: \"v3.11.0/openshift-apiserver/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-apiserver)\nAPIServerStaticResourcesDegraded: \"v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:openshift-apiserver)\nAPIServerStaticResourcesDegraded: \"v3.11.0/openshift-apiserver/svc.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get services api)\nAPIServerStaticResourcesDegraded: " to "All is well" |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "All is well" to "APIServerStaticResourcesDegraded: \"v3.11.0/openshift-apiserver/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-apiserver)\nAPIServerStaticResourcesDegraded: \"v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:openshift-apiserver)\nAPIServerStaticResourcesDegraded: \"v3.11.0/openshift-apiserver/svc.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get services api)\nAPIServerStaticResourcesDegraded: " |
| | openshift-route-controller-manager | kubelet | route-controller-manager-95cb5f987-46bsk | ProbeError | Liveness probe error: Get "https://10.128.0.70:8443/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) body: |
| (x5) | openshift-route-controller-manager | kubelet | route-controller-manager-95cb5f987-46bsk | ProbeError | Readiness probe error: Get "https://10.128.0.70:8443/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) body: |
| (x5) | openshift-route-controller-manager | kubelet | route-controller-manager-95cb5f987-46bsk | Unhealthy | Readiness probe failed: Get "https://10.128.0.70:8443/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) |
| | openshift-route-controller-manager | kubelet | route-controller-manager-95cb5f987-46bsk | Unhealthy | Liveness probe failed: Get "https://10.128.0.70:8443/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator | csi-snapshot-controller-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-cluster-storage-operator | cluster-storage-operator | cluster-storage-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| (x3) | openshift-cluster-version | kubelet | cluster-version-operator-6d5d5dcc89-cw2hx | Pulled | Container image "quay.io/openshift-release-dev/ocp-release@sha256:8c885ea0b3c5124989f0a9b93eba98eb9fca6bbd0262772d85d90bf713a4d572" already present on machine |
| (x3) | openshift-cluster-version | kubelet | cluster-version-operator-6d5d5dcc89-cw2hx | Created | Created container: cluster-version-operator |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-lock | LeaderElection | openshift-kube-scheduler-operator-5f85974995-g4rwv_b4604024-c363-4112-b342-f91bc36ae8d1 became leader |
| (x3) | openshift-cluster-version | kubelet | cluster-version-operator-6d5d5dcc89-cw2hx | Started | Started container cluster-version-operator |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator | openshift-kube-scheduler-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator | csi-snapshot-controller-operator-lock | LeaderElection | csi-snapshot-controller-operator-6bc8656fdc-2q7f5_bea46ce6-ce19-4d80-9e7b-f108906f2ce2 became leader |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator-lock | LeaderElection | openshift-controller-manager-operator-6c8676f99d-7z948_8ba4cf9c-e623-440c-8949-fbff56d5d895 became leader |
| | openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | MutatingWebhookConfigurationUpdated | Updated MutatingWebhookConfiguration.admissionregistration.k8s.io/catalogd-mutating-webhook-configuration because it changed |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-service-ca-operator | service-ca-operator | service-ca-operator-lock | LeaderElection | service-ca-operator-77758bc754-8smqn_1a0307b5-6449-4e10-892a-4c03488e906d became leader |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "All is well" to "WebhookRemovalControllerDegraded: csisnapshotcontrollers.operator.openshift.io \"cluster\" not found" |
| | openshift-cluster-version | openshift-cluster-version | version | LeaderElection | master-0_c465a515-4df4-43c3-8d97-3c391bd274ae became leader |
| | openshift-cluster-storage-operator | cluster-storage-operator | cluster-storage-operator-lock | LeaderElection | cluster-storage-operator-dcf7fc84b-fncfd_f6f53fb2-f234-44c7-8a6f-f31c3537d1e8 became leader |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "WebhookRemovalControllerDegraded: csisnapshotcontrollers.operator.openshift.io \"cluster\" not found" to "All is well" |
| | openshift-cluster-version | openshift-cluster-version | version | LoadPayload | Loading payload version="4.18.29" image="quay.io/openshift-release-dev/ocp-release@sha256:8c885ea0b3c5124989f0a9b93eba98eb9fca6bbd0262772d85d90bf713a4d572" |
| | openshift-cluster-version | openshift-cluster-version | version | RetrievePayload | Retrieving and verifying payload version="4.18.29" image="quay.io/openshift-release-dev/ocp-release@sha256:8c885ea0b3c5124989f0a9b93eba98eb9fca6bbd0262772d85d90bf713a4d572" |
| | openshift-cluster-version | openshift-cluster-version | version | PayloadLoaded | Payload loaded version="4.18.29" image="quay.io/openshift-release-dev/ocp-release@sha256:8c885ea0b3c5124989f0a9b93eba98eb9fca6bbd0262772d85d90bf713a4d572" architecture="amd64" |
| | openshift-authentication-operator | cluster-authentication-operator | cluster-authentication-operator-lock | LeaderElection | authentication-operator-6c968fdfdf-nrrfw_ed095787-ae9c-47b2-8f37-9613c2bc84fa became leader |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded changed from False to True ("IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.202.245:443/healthz\": dial tcp 172.30.202.245:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory") |
| (x2) | openshift-route-controller-manager | kubelet | route-controller-manager-95cb5f987-46bsk | BackOff | Back-off restarting failed container route-controller-manager in pod route-controller-manager-95cb5f987-46bsk_openshift-route-controller-manager(e4d7939a-5961-4608-b910-73e71aa55bf6) |
| | openshift-authentication-operator | oauth-apiserver-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | authentication-operator | FastControllerResync | Controller "openshift-apiserver-APIService" resync interval is set to 10s which might lead to client request throttling |
| | openshift-authentication-operator | oauth-apiserver-audit-policy-controller-auditpolicycontroller | authentication-operator | FastControllerResync | Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling |
| | openshift-authentication-operator | oauth-apiserver-webhook-authenticator-controller-webhookauthenticatorcontroller | authentication-operator | SecretCreated | Created Secret/webhook-authentication-integrated-oauth -n openshift-config because it was missing |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.202.245:443/healthz\": dial tcp 172.30.202.245:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.202.245:443/healthz\": dial tcp 172.30.202.245:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" |
| (x3) | openshift-route-controller-manager | kubelet | route-controller-manager-95cb5f987-46bsk | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c416b201d480bddb5a4960ec42f4740761a1335001cf84ba5ae19ad6857771b1" already present on machine |
| | openshift-route-controller-manager | route-controller-manager | openshift-route-controllers | LeaderElection | route-controller-manager-95cb5f987-46bsk_d647a0da-6f37-43a4-80dd-2a0a58f4cd2c became leader |
| | openshift-config-operator | config-operator | config-operator-lock | LeaderElection | openshift-config-operator-68758cbcdb-zh8g5_74c26017-16e4-40a4-b6bf-ae7daef8127c became leader |
| | openshift-config-operator | config-operator-configoperatorcontroller | openshift-config-operator | FastControllerResync | Controller "ConfigOperatorController" resync interval is set to 10s which might lead to client request throttling |
| | openshift-network-node-identity | master-0_a29fc5f9-86ae-4529-a885-4e03fd926ca7 | ovnkube-identity | LeaderElection | master-0_a29fc5f9-86ae-4529-a885-4e03fd926ca7 became leader |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-audit-policy-controller-auditpolicycontroller | openshift-kube-apiserver-operator | FastControllerResync | Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling |
| | openshift-kube-apiserver-operator | kube-apiserver-operator | kube-apiserver-operator-lock | LeaderElection | kube-apiserver-operator-765d9ff747-gr68k_425735e5-5651-4e99-91ec-906c4b799d51 became leader |
| | openshift-kube-apiserver-operator | kube-apiserver-operator | openshift-kube-apiserver-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-cluster-storage-operator | snapshot-controller-leader/csi-snapshot-controller-6b958b6f94-w74zr | snapshot-controller-leader | LeaderElection | csi-snapshot-controller-6b958b6f94-w74zr became leader |
| (x2) | openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | openshift-kube-apiserver-operator | ObservedConfigChanged | Writing updated observed config:   map[string]any{   "admission": map[string]any{"pluginConfig": map[string]any{"PodSecurity": map[string]any{"configuration": map[string]any{"defaults": map[string]any{"audit": string("restricted"), "audit-version": string("latest"), "enforce": string("privileged"), "enforce-version": string("latest"), ...}}}, "network.openshift.io/ExternalIPRanger": map[string]any{"configuration": map[string]any{"allowIngressIP": bool(false), "apiVersion": string("network.openshift.io/v1"), "kind": string("ExternalIPRangerAdmissionConfig")}}, "network.openshift.io/RestrictedEndpointsAdmission": map[string]any{"configuration": map[string]any{"apiVersion": string("network.openshift.io/v1"), "kind": string("RestrictedEndpointsAdmissionConfig"), "restrictedCIDRs": []any{string("10.128.0.0/16"), string("172.30.0.0/16")}}}}},   "apiServerArguments": map[string]any{   "api-audiences": []any{string("https://kubernetes.default.svc")}, + "authentication-token-webhook-config-file": []any{ + string("/etc/kubernetes/static-pod-resources/secrets/webhook-authenticator/kubeConfig"), + }, + "authentication-token-webhook-version": []any{string("v1")},   "etcd-servers": []any{string("https://192.168.32.10:2379"), string("https://localhost:2379")},   "feature-gates": []any{string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), ...},   ... // 6 identical entries   },   "corsAllowedOrigins": []any{string(`//127\.0\.0\.1(:\|$)`), string("//localhost(:\|$)")},   "gracefulTerminationDuration": string("15"),   ... // 2 identical entries   } |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | openshift-kube-apiserver-operator | SecretCreated | Created Secret/webhook-authenticator -n openshift-kube-apiserver because it was missing |
| (x2) | openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | openshift-kube-apiserver-operator | ObserveWebhookTokenAuthenticator | authentication-token webhook configuration status changed from false to true |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | openshift-kube-apiserver-operator | StartingNewRevision | new revision 2 triggered by "optional secret/webhook-authenticator has been created" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | openshift-kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 1:\nNodeInstallerDegraded: installer: ving-cert\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-000\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-001\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-002\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-003\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-004\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-005\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-006\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-007\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-008\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-009\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\",\nNodeInstallerDegraded: (string) (len=29) \"control-plane-node-kubeconfig\",\nNodeInstallerDegraded: (string) (len=26) \"check-endpoints-kubeconfig\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I1204 11:38:33.899459 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I1204 11:38:33.928330 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I1204 11:38:33.928413 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I1204 11:38:33.928428 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I1204 11:38:33.933580 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I1204 11:39:03.933887 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F1204 11:39:17.938070 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: " |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | openshift-kube-apiserver-operator | InstallerPodFailed | installer errors: installer: ving-cert", (string) (len=21) "user-serving-cert-000", (string) (len=21) "user-serving-cert-001", (string) (len=21) "user-serving-cert-002", (string) (len=21) "user-serving-cert-003", (string) (len=21) "user-serving-cert-004", (string) (len=21) "user-serving-cert-005", (string) (len=21) "user-serving-cert-006", (string) (len=21) "user-serving-cert-007", (string) (len=21) "user-serving-cert-008", (string) (len=21) "user-serving-cert-009" }, CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) { (string) (len=20) "aggregator-client-ca", (string) (len=9) "client-ca", (string) (len=29) "control-plane-node-kubeconfig", (string) (len=26) "check-endpoints-kubeconfig" }, OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) { (string) (len=17) "trusted-ca-bundle" }, CertDir: (string) (len=57) "/etc/kubernetes/static-pod-resources/kube-apiserver-certs", ResourceDir: (string) (len=36) "/etc/kubernetes/static-pod-resources", PodManifestDir: (string) (len=25) "/etc/kubernetes/manifests", Timeout: (time.Duration) 2m0s, StaticPodManifestsLockFile: (string) "", PodMutationFns: ([]installerpod.PodMutationFunc) <nil>, KubeletVersion: (string) "" }) I1204 11:38:33.899459 1 cmd.go:413] Getting controller reference for node master-0 I1204 11:38:33.928330 1 cmd.go:426] Waiting for installer revisions to settle for node master-0 I1204 11:38:33.928413 1 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false I1204 11:38:33.928428 1 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false I1204 11:38:33.933580 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0 I1204 11:39:03.933887 1 cmd.go:524] Getting installer pods for node master-0 F1204 11:39:17.938070 1 cmd.go:109] Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": net/http: request canceled (Client.Timeout exceeded while awaiting headers) |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | openshift-kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-pod-2 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | openshift-kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/config-2 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | openshift-kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-2 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | openshift-kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/bound-sa-token-signing-certs-2 -n openshift-kube-apiserver because it was missing |
| (x14) | openshift-ingress-operator | kubelet | ingress-operator-8649c48786-cx2b2 | BackOff | Back-off restarting failed container ingress-operator in pod ingress-operator-8649c48786-cx2b2_openshift-ingress-operator(b011b1f1-3235-4e20-825b-ce711c052407) |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | openshift-kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/etcd-serving-ca-2 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | openshift-kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-server-ca-2 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | openshift-kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kubelet-serving-ca-2 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | openshift-kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/sa-token-signing-certs-2 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | openshift-kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-audit-policies-2 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | openshift-kube-apiserver-operator | SecretCreated | Created Secret/etcd-client-2 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | openshift-kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-serving-certkey-2 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | openshift-kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-client-token-2 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | openshift-kube-apiserver-operator | SecretCreated | Created Secret/webhook-authenticator-2 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | openshift-kube-apiserver-operator | RevisionTriggered | new revision 2 triggered by "optional secret/webhook-authenticator has been created" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | openshift-kube-apiserver-operator | StartingNewRevision | new revision 3 triggered by "required configmap/config has changed" |
| | openshift-kube-apiserver | kubelet | installer-1-retry-1-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f43c31aa3359159d4557dad3cfaf812d8ce44db9cb9ae970e06d3479070b660" already present on machine |
| | openshift-kube-apiserver | multus | installer-1-retry-1-master-0 | AddedInterface | Add eth0 [10.128.0.74/23] from ovn-kubernetes |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | openshift-kube-apiserver-operator | PodCreated | Created Pod/installer-1-retry-1-master-0 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver | kubelet | installer-1-retry-1-master-0 | Created | Created container: installer |
| | openshift-kube-apiserver | kubelet | installer-1-retry-1-master-0 | Started | Started container installer |
| | openshift-cloud-controller-manager-operator | master-0_1eacd090-717d-4028-b67c-2ccc9f96a6b5 | cluster-cloud-controller-manager-leader | LeaderElection | master-0_1eacd090-717d-4028-b67c-2ccc9f96a6b5 became leader |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | openshift-kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-pod-3 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | openshift-kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/config-3 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | openshift-kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-3 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver | kubelet | installer-1-retry-1-master-0 | Killing | Stopping container installer |
| | openshift-operator-controller | operator-controller-controller-manager-7cbd59c7f8-qcz9t_7d47418a-770e-4e89-92e6-541238a7be53 | 9c4404e7.operatorframework.io | LeaderElection | operator-controller-controller-manager-7cbd59c7f8-qcz9t_7d47418a-770e-4e89-92e6-541238a7be53 became leader |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
openshift-kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 1:\nNodeInstallerDegraded: installer: ving-cert\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-000\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-001\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-002\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-003\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-004\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-005\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-006\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-007\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-008\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-009\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\",\nNodeInstallerDegraded: (string) (len=29) \"control-plane-node-kubeconfig\",\nNodeInstallerDegraded: (string) (len=26) \"check-endpoints-kubeconfig\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I1204 11:38:33.899459 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I1204 11:38:33.928330 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I1204 11:38:33.928413 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I1204 11:38:33.928428 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I1204 11:38:33.933580 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I1204 11:39:03.933887 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F1204 11:39:17.938070 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: " to "NodeControllerDegraded: All master nodes are ready",Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 1" to "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 2",Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 1" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 2" | |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | openshift-kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/bound-sa-token-signing-certs-3 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | openshift-kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/etcd-serving-ca-3 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | openshift-kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-server-ca-3 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | openshift-kube-apiserver-operator | PodCreated | Created Pod/installer-2-master-0 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver | multus | installer-2-master-0 | AddedInterface | Add eth0 [10.128.0.75/23] from ovn-kubernetes |
| | openshift-kube-apiserver | kubelet | installer-2-master-0 | Started | Started container installer |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | openshift-kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kubelet-serving-ca-3 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver | kubelet | installer-2-master-0 | Created | Created container: installer |
| | openshift-machine-api | control-plane-machine-set-operator-7df95c79b5-7w5lm_6643d653-940b-487b-905c-133114533395 | control-plane-machine-set-leader | LeaderElection | control-plane-machine-set-operator-7df95c79b5-7w5lm_6643d653-940b-487b-905c-133114533395 became leader |
| | openshift-kube-apiserver | kubelet | installer-2-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f43c31aa3359159d4557dad3cfaf812d8ce44db9cb9ae970e06d3479070b660" already present on machine |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | openshift-kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/sa-token-signing-certs-3 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | openshift-kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-audit-policies-3 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | openshift-kube-apiserver-operator | SecretCreated | Created Secret/etcd-client-3 -n openshift-kube-apiserver because it was missing |
| | openshift-cloud-controller-manager-operator | master-0_3f3bfafd-68c3-41e2-92a3-561a6de08c34 | cluster-cloud-config-sync-leader | LeaderElection | master-0_3f3bfafd-68c3-41e2-92a3-561a6de08c34 became leader |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | openshift-kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-serving-certkey-3 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | openshift-kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-client-token-3 -n openshift-kube-apiserver because it was missing |
| | openshift-operator-lifecycle-manager | package-server-manager-67477646d4-7hndf_efa69d16-9250-446b-9dbd-9d804226ad9f | packageserver-controller-lock | LeaderElection | package-server-manager-67477646d4-7hndf_efa69d16-9250-446b-9dbd-9d804226ad9f became leader |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | openshift-kube-apiserver-operator | SecretCreated | Created Secret/webhook-authenticator-3 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | openshift-kube-apiserver-operator | RevisionTriggered | new revision 3 triggered by "required configmap/config has changed" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | openshift-kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 2" to "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 3",Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 2" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 3" |
| | openshift-kube-apiserver | kubelet | installer-2-master-0 | Killing | Stopping container installer |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | openshift-kube-apiserver-operator | PodCreated | Created Pod/installer-3-master-0 -n openshift-kube-apiserver because it was missing |
| | openshift-catalogd | catalogd-controller-manager-7cc89f4c4c-fd9pv_4ade11d5-e5ea-47a7-8a3b-e09c71c0f8f3 | catalogd-operator-lock | LeaderElection | catalogd-controller-manager-7cc89f4c4c-fd9pv_4ade11d5-e5ea-47a7-8a3b-e09c71c0f8f3 became leader |
| | openshift-kube-apiserver | multus | installer-3-master-0 | AddedInterface | Add eth0 [10.128.0.76/23] from ovn-kubernetes |
| | openshift-kube-apiserver | kubelet | installer-3-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f43c31aa3359159d4557dad3cfaf812d8ce44db9cb9ae970e06d3479070b660" already present on machine |
| | openshift-kube-apiserver | kubelet | installer-3-master-0 | Started | Started container installer |
| | openshift-kube-apiserver | kubelet | installer-3-master-0 | Created | Created container: installer |
| | openshift-cluster-machine-approver | master-0_5dec8575-8a2b-456a-9ad0-0baecf62134c | cluster-machine-approver-leader | LeaderElection | master-0_5dec8575-8a2b-456a-9ad0-0baecf62134c became leader |
| | openshift-machine-api | cluster-baremetal-operator-78f758c7b9-zgkh5_740825e5-a824-4483-a2ae-5efff8f920c0 | cluster-baremetal-operator | LeaderElection | cluster-baremetal-operator-78f758c7b9-zgkh5_740825e5-a824-4483-a2ae-5efff8f920c0 became leader |
| | openshift-kube-scheduler | default-scheduler | kube-scheduler | LeaderElection | master-0_8ff66129-d73b-4267-abd6-ffe1e7286622 became leader |
| | openshift-marketplace | multus | redhat-operators-v4tlj | AddedInterface | Add eth0 [10.128.0.77/23] from ovn-kubernetes |
| | openshift-marketplace | multus | certified-operators-dr5sc | AddedInterface | Add eth0 [10.128.0.78/23] from ovn-kubernetes |
| | openshift-marketplace | multus | community-operators-fvf4r | AddedInterface | Add eth0 [10.128.0.80/23] from ovn-kubernetes |
| | openshift-marketplace | multus | redhat-marketplace-xsdsw | AddedInterface | Add eth0 [10.128.0.79/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | redhat-marketplace-xsdsw | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | redhat-operators-v4tlj | Created | Created container: extract-utilities |
| | openshift-marketplace | kubelet | certified-operators-dr5sc | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | certified-operators-dr5sc | Created | Created container: extract-utilities |
| | openshift-marketplace | kubelet | certified-operators-dr5sc | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1ca78c423f43f89a0411e40393642f64e4f8df9e5f61c25e31047c4cce170f9" already present on machine |
| | openshift-marketplace | kubelet | community-operators-fvf4r | Created | Created container: extract-utilities |
| | openshift-marketplace | kubelet | redhat-operators-v4tlj | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1ca78c423f43f89a0411e40393642f64e4f8df9e5f61c25e31047c4cce170f9" already present on machine |
| | openshift-marketplace | kubelet | community-operators-fvf4r | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | redhat-marketplace-xsdsw | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1ca78c423f43f89a0411e40393642f64e4f8df9e5f61c25e31047c4cce170f9" already present on machine |
| | openshift-marketplace | kubelet | redhat-marketplace-xsdsw | Created | Created container: extract-utilities |
| | openshift-marketplace | kubelet | community-operators-fvf4r | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1ca78c423f43f89a0411e40393642f64e4f8df9e5f61c25e31047c4cce170f9" already present on machine |
| | openshift-marketplace | kubelet | redhat-operators-v4tlj | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | redhat-operators-v4tlj | Pulling | Pulling image "registry.redhat.io/redhat/redhat-operator-index:v4.18" |
| | openshift-marketplace | kubelet | community-operators-fvf4r | Pulling | Pulling image "registry.redhat.io/redhat/community-operator-index:v4.18" |
| | openshift-marketplace | kubelet | redhat-marketplace-xsdsw | Pulling | Pulling image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" |
| | openshift-marketplace | kubelet | certified-operators-dr5sc | Pulling | Pulling image "registry.redhat.io/redhat/certified-operator-index:v4.18" |
| | openshift-marketplace | kubelet | redhat-operators-v4tlj | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-operator-index:v4.18" in 1.462s (1.462s including waiting). Image size: 1610175307 bytes. |
| | openshift-marketplace | kubelet | redhat-marketplace-xsdsw | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" in 1.439s (1.439s including waiting). Image size: 1129027903 bytes. |
| | openshift-marketplace | kubelet | redhat-operators-v4tlj | Started | Started container extract-content |
| | openshift-marketplace | kubelet | certified-operators-dr5sc | Pulled | Successfully pulled image "registry.redhat.io/redhat/certified-operator-index:v4.18" in 1.617s (1.617s including waiting). Image size: 1205106509 bytes. |
| | openshift-marketplace | kubelet | redhat-operators-v4tlj | Created | Created container: extract-content |
| | openshift-marketplace | kubelet | certified-operators-dr5sc | Started | Started container extract-content |
| | openshift-marketplace | kubelet | certified-operators-dr5sc | Created | Created container: extract-content |
| | openshift-marketplace | kubelet | community-operators-fvf4r | Pulled | Successfully pulled image "registry.redhat.io/redhat/community-operator-index:v4.18" in 1.568s (1.568s including waiting). Image size: 1201434959 bytes. |
| | openshift-marketplace | kubelet | community-operators-fvf4r | Created | Created container: extract-content |
| | openshift-marketplace | kubelet | community-operators-fvf4r | Started | Started container extract-content |
| | openshift-marketplace | kubelet | redhat-marketplace-xsdsw | Created | Created container: extract-content |
| | openshift-marketplace | kubelet | redhat-marketplace-xsdsw | Started | Started container extract-content |
| | openshift-marketplace | kubelet | redhat-marketplace-xsdsw | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" in 392ms (392ms including waiting). Image size: 912722556 bytes. |
| | openshift-marketplace | kubelet | redhat-marketplace-xsdsw | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" |
| | openshift-marketplace | kubelet | redhat-marketplace-xsdsw | Created | Created container: registry-server |
| | openshift-marketplace | kubelet | certified-operators-dr5sc | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" |
| | openshift-marketplace | kubelet | redhat-marketplace-xsdsw | Started | Started container registry-server |
| | openshift-marketplace | kubelet | community-operators-fvf4r | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" |
| | openshift-marketplace | kubelet | community-operators-fvf4r | Started | Started container registry-server |
| | openshift-marketplace | kubelet | community-operators-fvf4r | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" in 849ms (849ms including waiting). Image size: 912722556 bytes. |
| | openshift-marketplace | kubelet | redhat-operators-v4tlj | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" in 441ms (441ms including waiting). Image size: 912722556 bytes. |
| | openshift-marketplace | kubelet | redhat-operators-v4tlj | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" |
| | openshift-marketplace | kubelet | certified-operators-dr5sc | Created | Created container: registry-server |
| | openshift-marketplace | kubelet | certified-operators-dr5sc | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" in 941ms (941ms including waiting). Image size: 912722556 bytes. |
| | openshift-marketplace | kubelet | certified-operators-dr5sc | Started | Started container registry-server |
| | openshift-marketplace | kubelet | community-operators-fvf4r | Created | Created container: registry-server |
| | openshift-marketplace | kubelet | redhat-operators-v4tlj | Started | Started container registry-server |
| | openshift-marketplace | kubelet | redhat-operators-v4tlj | Created | Created container: registry-server |
| | openshift-marketplace | kubelet | redhat-operators-v4tlj | Unhealthy | Startup probe failed: timeout: failed to connect service ":50051" within 1s |
| | openshift-marketplace | kubelet | redhat-marketplace-xsdsw | Killing | Stopping container registry-server |
| | openshift-machine-config-operator | machine-config-operator | master-0 | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-marketplace | kubelet | redhat-operators-v4tlj | Killing | Stopping container registry-server |
| | openshift-marketplace | kubelet | certified-operators-dr5sc | Killing | Stopping container registry-server |
| | openshift-kube-apiserver | kubelet | bootstrap-kube-apiserver-master-0 | Killing | Stopping container kube-apiserver |
| | openshift-marketplace | kubelet | community-operators-fvf4r | Killing | Stopping container registry-server |
| | openshift-kube-apiserver | kubelet | bootstrap-kube-apiserver-master-0 | Killing | Stopping container kube-apiserver-insecure-readyz |
| | default | apiserver | openshift-kube-apiserver | AfterShutdownDelayDuration | The minimal shutdown duration of 0s finished |
| | default | apiserver | openshift-kube-apiserver | ShutdownInitiated | Received signal to terminate, becoming unready, but keeping serving |
| | default | apiserver | openshift-kube-apiserver | InFlightRequestsDrained | All non long-running request(s) in-flight have drained |
| | openshift-kube-apiserver | static-pod-installer | installer-3-master-0 | StaticPodInstallerCompleted | Successfully installed revision 3 |
| | default | apiserver | openshift-kube-apiserver | TerminationPreShutdownHooksFinished | All pre-shutdown hooks have been finished |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | openshift-kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: status.versions changed from [{"raw-internal" "4.18.29"}] to [{"raw-internal" "4.18.29"} {"operator" "4.18.29"} {"kube-apiserver" "1.31.13"}] |
| | default | apiserver | openshift-kube-apiserver | HTTPServerStoppedListening | HTTP Server has stopped listening |
| (x2) | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | openshift-kube-apiserver-operator | OperatorVersionChanged | clusteroperator/kube-apiserver version "kube-apiserver" changed from "" to "1.31.13" |
| (x2) | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | openshift-kube-apiserver-operator | OperatorVersionChanged | clusteroperator/kube-apiserver version "operator" changed from "" to "4.18.29" |
| | default | kubelet | master-0 | Starting | Starting kubelet. |
| | default | kubelet | master-0 | NodeAllocatableEnforced | Updated Node Allocatable limit across pods |
| | default | apiserver | openshift-kube-apiserver | TerminationGracefulTerminationFinished | All pending requests processed |
| (x7) | default | kubelet | master-0 | NodeHasSufficientPID | Node master-0 status is now: NodeHasSufficientPID |
| (x8) | default | kubelet | master-0 | NodeHasSufficientMemory | Node master-0 status is now: NodeHasSufficientMemory |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6d5891cdd7dcf7c9081de8b364b4c96446b7f946f7880fbae291a4592a198264" already present on machine |
| (x8) | default | kubelet | master-0 | NodeHasNoDiskPressure | Node master-0 status is now: NodeHasNoDiskPressure |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver-cert-syncer |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f43c31aa3359159d4557dad3cfaf812d8ce44db9cb9ae970e06d3479070b660" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f43c31aa3359159d4557dad3cfaf812d8ce44db9cb9ae970e06d3479070b660" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver-cert-syncer |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver-cert-regeneration-controller |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver-cert-regeneration-controller |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver-check-endpoints |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver-check-endpoints |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f43c31aa3359159d4557dad3cfaf812d8ce44db9cb9ae970e06d3479070b660" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver-insecure-readyz |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver-insecure-readyz |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f43c31aa3359159d4557dad3cfaf812d8ce44db9cb9ae970e06d3479070b660" already present on machine |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | KubeAPIReadyz | readyz=true |
| | openshift-kube-apiserver | cert-regeneration-controller | cert-regeneration-controller-lock | LeaderElection | master-0_6cef9669-7c25-4988-8c33-72ef3c6d6a13 became leader |
| | openshift-machine-api | kubelet | cluster-baremetal-operator-78f758c7b9-zgkh5 | FailedMount | MountVolume.SetUp failed for volume "images" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-machine-api | kubelet | machine-api-operator-88d48b57d-9fjtd | FailedMount | MountVolume.SetUp failed for volume "config" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-74f484689c-jmfn2 | FailedMount | MountVolume.SetUp failed for volume "auth-proxy-config" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-machine-api | kubelet | cluster-autoscaler-operator-5f49d774cd-894dk | FailedMount | MountVolume.SetUp failed for volume "cert" : failed to sync secret cache: timed out waiting for the condition |
| | openshift-machine-api | kubelet | cluster-autoscaler-operator-5f49d774cd-894dk | FailedMount | MountVolume.SetUp failed for volume "auth-proxy-config" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-74f484689c-jmfn2 | FailedMount | MountVolume.SetUp failed for volume "cloud-controller-manager-operator-tls" : failed to sync secret cache: timed out waiting for the condition |
| | openshift-insights | kubelet | insights-operator-55965856b6-skbmb | FailedMount | MountVolume.SetUp failed for volume "service-ca-bundle" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-74f484689c-jmfn2 | FailedMount | MountVolume.SetUp failed for volume "images" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-machine-api | kubelet | cluster-baremetal-operator-78f758c7b9-zgkh5 | FailedMount | MountVolume.SetUp failed for volume "cert" : failed to sync secret cache: timed out waiting for the condition |
| | openshift-machine-api | kubelet | cluster-baremetal-operator-78f758c7b9-zgkh5 | FailedMount | MountVolume.SetUp failed for volume "config" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-controller-manager | kubelet | controller-manager-6686654b8d-rrndk | FailedMount | MountVolume.SetUp failed for volume "config" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-machine-api | kubelet | cluster-baremetal-operator-78f758c7b9-zgkh5 | FailedMount | MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" : failed to sync secret cache: timed out waiting for the condition |
| | openshift-controller-manager | kubelet | controller-manager-6686654b8d-rrndk | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : failed to sync secret cache: timed out waiting for the condition |
| | openshift-controller-manager | kubelet | controller-manager-6686654b8d-rrndk | FailedMount | MountVolume.SetUp failed for volume "proxy-ca-bundles" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-route-controller-manager | kubelet | route-controller-manager-95cb5f987-46bsk | FailedMount | MountVolume.SetUp failed for volume "client-ca" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-route-controller-manager | kubelet | route-controller-manager-95cb5f987-46bsk | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : failed to sync secret cache: timed out waiting for the condition |
| | openshift-route-controller-manager | kubelet | route-controller-manager-95cb5f987-46bsk | FailedMount | MountVolume.SetUp failed for volume "config" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-machine-api | kubelet | machine-api-operator-88d48b57d-9fjtd | FailedMount | MountVolume.SetUp failed for volume "images" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-machine-api | kubelet | machine-api-operator-88d48b57d-9fjtd | FailedMount | MountVolume.SetUp failed for volume "machine-api-operator-tls" : failed to sync secret cache: timed out waiting for the condition |
| | openshift-controller-manager | kubelet | controller-manager-6686654b8d-rrndk | FailedMount | MountVolume.SetUp failed for volume "client-ca" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-machine-config-operator | kubelet | machine-config-operator-dc5d7666f-p2cmn | FailedMount | MountVolume.SetUp failed for volume "images" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-machine-config-operator | kubelet | machine-config-operator-dc5d7666f-p2cmn | FailedMount | MountVolume.SetUp failed for volume "proxy-tls" : failed to sync secret cache: timed out waiting for the condition |
| | openshift-machine-config-operator | kubelet | machine-config-operator-dc5d7666f-p2cmn | FailedMount | MountVolume.SetUp failed for volume "auth-proxy-config" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-insights | kubelet | insights-operator-55965856b6-skbmb | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : failed to sync secret cache: timed out waiting for the condition |
| | openshift-insights | kubelet | insights-operator-55965856b6-skbmb | FailedMount | MountVolume.SetUp failed for volume "trusted-ca-bundle" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-cloud-credential-operator | kubelet | cloud-credential-operator-698c598cfc-95jdn | FailedMount | MountVolume.SetUp failed for volume "cco-trusted-ca" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-kube-apiserver | kubelet | installer-3-master-0 | FailedMount | MountVolume.SetUp failed for volume "kube-api-access" : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered |
| | openshift-ingress-operator | kubelet | ingress-operator-8649c48786-cx2b2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:831f30660844091d6154e2674d3a9da6f34271bf8a2c40b56f7416066318742b" already present on machine |
| | openshift-ingress-operator | kubelet | ingress-operator-8649c48786-cx2b2 | Created | Created container: ingress-operator |
| | openshift-ingress-operator | kubelet | ingress-operator-8649c48786-cx2b2 | Started | Started container ingress-operator |
| | openshift-ingress-operator | cluster-ingress-operator | ingress-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-kube-apiserver | kubelet | kube-apiserver-startup-monitor-master-0 | Killing | Stopping container startup-monitor |
| | openshift-authentication-operator | cluster-authentication-operator-payload-config-controller-openshift-authentication-payloadconfig | authentication-operator | SecretCreated | Created Secret/v4-0-config-system-session -n openshift-authentication because it was missing |
| | openshift-authentication-operator | cluster-authentication-operator-payload-config-controller-openshift-authentication-payloadconfig | authentication-operator | ConfigMapCreated | Created ConfigMap/v4-0-config-system-cliconfig -n openshift-authentication because it was missing |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.202.245:443/healthz\": dial tcp 172.30.202.245:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: ingress.config/cluster does not yet have status for the \"openshift-authentication/oauth-openshift\" route\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.202.245:443/healthz\": dial tcp 172.30.202.245:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found",Progressing changed from False to True (""),Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.39:8443/apis/oauth.openshift.io/v1: bad status from https://10.128.0.39:8443/apis/oauth.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.128.0.39:8443/apis/user.openshift.io/v1: bad status from https://10.128.0.39:8443/apis/user.openshift.io/v1: 401\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.202.245:443/healthz\": dial tcp 172.30.202.245:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "OAuthServerRouteEndpointAccessibleControllerAvailable: failed to retrieve route from cache: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.202.245:443/healthz\": dial tcp 172.30.202.245:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods).\nWellKnownAvailable: route.route.openshift.io \"oauth-openshift\" not found" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/leader-election-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-scheduler\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/scheduler-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler:public-2\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/policyconfigmap-role.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/roles/system:openshift:sa-listing-configmaps\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/policyconfigmap-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/rolebindings/system:openshift:sa-listing-configmaps\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/svc.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/services/scheduler\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/openshift-kube-scheduler-sa\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/localhost-recovery-client-crb.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler-recovery\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \nTargetConfigControllerDegraded: \"configmap\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/configmaps/config\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/serviceaccount-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/configmaps/serviceaccount-ca\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/localhost-recovery-client\": dial tcp 172.30.0.1:443: connect: connection refused" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/leader-election-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-scheduler\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/scheduler-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler:public-2\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/policyconfigmap-role.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/roles/system:openshift:sa-listing-configmaps\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/policyconfigmap-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/rolebindings/system:openshift:sa-listing-configmaps\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/svc.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/services/scheduler\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/openshift-kube-scheduler-sa\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/localhost-recovery-client-crb.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler-recovery\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \nTargetConfigControllerDegraded: \"configmap\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/configmaps/config\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/serviceaccount-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/configmaps/serviceaccount-ca\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/localhost-recovery-client\": dial tcp 172.30.0.1:443: connect: connection refused" to "NodeControllerDegraded: All master nodes are ready\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/leader-election-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-scheduler\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/scheduler-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler:public-2\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/policyconfigmap-role.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/roles/system:openshift:sa-listing-configmaps\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/policyconfigmap-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/rolebindings/system:openshift:sa-listing-configmaps\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/svc.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/services/scheduler\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/openshift-kube-scheduler-sa\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/localhost-recovery-client-crb.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler-recovery\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: " |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Available changed from False to True ("All is well") |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "All is well" to "WebhookRemovalControllerDegraded: [Delete \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/csi-snapshot-webhook-clusterrole\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/csi-snapshot-webhook-clusterrolebinding\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/apis/admissionregistration.k8s.io/v1/validatingwebhookconfigurations/snapshot.storage.k8s.io\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/api/v1/namespaces/openshift-cluster-storage-operator/serviceaccounts/csi-snapshot-webhook\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/api/v1/namespaces/openshift-cluster-storage-operator/services/csi-snapshot-webhook\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/apis/policy/v1/namespaces/openshift-cluster-storage-operator/poddisruptionbudgets/csi-snapshot-webhook-pdb\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/apis/apps/v1/namespaces/openshift-cluster-storage-operator/deployments/csi-snapshot-webhook\": dial tcp 172.30.0.1:443: connect: connection refused]\nCSISnapshotStaticResourceControllerDegraded: \"serviceaccount.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-cluster-storage-operator/serviceaccounts/csi-snapshot-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nCSISnapshotStaticResourceControllerDegraded: \nCSISnapshotControllerDegraded: Get \"https://172.30.0.1:443/apis/apps/v1/namespaces/openshift-cluster-storage-operator/deployments/csi-snapshot-controller\": dial tcp 172.30.0.1:443: connect: connection refused" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/leader-election-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-scheduler\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/scheduler-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler:public-2\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/policyconfigmap-role.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/roles/system:openshift:sa-listing-configmaps\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/policyconfigmap-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/rolebindings/system:openshift:sa-listing-configmaps\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/svc.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/services/scheduler\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/openshift-kube-scheduler-sa\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/localhost-recovery-client-crb.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler-recovery\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: " to "NodeControllerDegraded: All master nodes are ready" |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "WebhookRemovalControllerDegraded: [Delete \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/csi-snapshot-webhook-clusterrole\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/csi-snapshot-webhook-clusterrolebinding\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/apis/admissionregistration.k8s.io/v1/validatingwebhookconfigurations/snapshot.storage.k8s.io\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/api/v1/namespaces/openshift-cluster-storage-operator/serviceaccounts/csi-snapshot-webhook\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/api/v1/namespaces/openshift-cluster-storage-operator/services/csi-snapshot-webhook\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/apis/policy/v1/namespaces/openshift-cluster-storage-operator/poddisruptionbudgets/csi-snapshot-webhook-pdb\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/apis/apps/v1/namespaces/openshift-cluster-storage-operator/deployments/csi-snapshot-webhook\": dial tcp 172.30.0.1:443: connect: connection refused]\nCSISnapshotStaticResourceControllerDegraded: \"serviceaccount.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-cluster-storage-operator/serviceaccounts/csi-snapshot-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nCSISnapshotStaticResourceControllerDegraded: \nCSISnapshotControllerDegraded: Get \"https://172.30.0.1:443/apis/apps/v1/namespaces/openshift-cluster-storage-operator/deployments/csi-snapshot-controller\": dial tcp 172.30.0.1:443: connect: connection refused" to "WebhookRemovalControllerDegraded: [Delete \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/csi-snapshot-webhook-clusterrole\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/csi-snapshot-webhook-clusterrolebinding\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/apis/admissionregistration.k8s.io/v1/validatingwebhookconfigurations/snapshot.storage.k8s.io\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/api/v1/namespaces/openshift-cluster-storage-operator/serviceaccounts/csi-snapshot-webhook\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/api/v1/namespaces/openshift-cluster-storage-operator/services/csi-snapshot-webhook\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/apis/policy/v1/namespaces/openshift-cluster-storage-operator/poddisruptionbudgets/csi-snapshot-webhook-pdb\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/apis/apps/v1/namespaces/openshift-cluster-storage-operator/deployments/csi-snapshot-webhook\": dial tcp 172.30.0.1:443: connect: connection refused]\nCSISnapshotControllerDegraded: Get \"https://172.30.0.1:443/apis/apps/v1/namespaces/openshift-cluster-storage-operator/deployments/csi-snapshot-controller\": dial tcp 172.30.0.1:443: connect: connection refused" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: ingress.config/cluster does not yet have status for the \"openshift-authentication/oauth-openshift\" route\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.202.245:443/healthz\": dial tcp 172.30.202.245:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: ingress.config/cluster does not yet have status for the \"openshift-authentication/oauth-openshift\" route\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.202.245:443/healthz\": dial tcp 172.30.202.245:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)",Available message changed from "OAuthServerRouteEndpointAccessibleControllerAvailable: failed to retrieve route from cache: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.202.245:443/healthz\": dial tcp 172.30.202.245:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods).\nWellKnownAvailable: route.route.openshift.io \"oauth-openshift\" not found" to "OAuthServerRouteEndpointAccessibleControllerAvailable: failed to retrieve route from cache: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.202.245:443/healthz\": dial tcp 172.30.202.245:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods).\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: ingress.config/cluster does not yet have status for the \"openshift-authentication/oauth-openshift\" route\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.202.245:443/healthz\": dial tcp 172.30.202.245:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: route \"openshift-authentication/oauth-openshift\": status does not have a valid host address\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.202.245:443/healthz\": dial tcp 172.30.202.245:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "WebhookRemovalControllerDegraded: [Delete \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/csi-snapshot-webhook-clusterrole\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/csi-snapshot-webhook-clusterrolebinding\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/apis/admissionregistration.k8s.io/v1/validatingwebhookconfigurations/snapshot.storage.k8s.io\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/api/v1/namespaces/openshift-cluster-storage-operator/serviceaccounts/csi-snapshot-webhook\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/api/v1/namespaces/openshift-cluster-storage-operator/services/csi-snapshot-webhook\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/apis/policy/v1/namespaces/openshift-cluster-storage-operator/poddisruptionbudgets/csi-snapshot-webhook-pdb\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/apis/apps/v1/namespaces/openshift-cluster-storage-operator/deployments/csi-snapshot-webhook\": dial tcp 172.30.0.1:443: connect: connection refused]\nCSISnapshotControllerDegraded: Get \"https://172.30.0.1:443/apis/apps/v1/namespaces/openshift-cluster-storage-operator/deployments/csi-snapshot-controller\": dial tcp 172.30.0.1:443: connect: connection refused" to "WebhookRemovalControllerDegraded: [Delete \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/csi-snapshot-webhook-clusterrole\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/csi-snapshot-webhook-clusterrolebinding\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/apis/admissionregistration.k8s.io/v1/validatingwebhookconfigurations/snapshot.storage.k8s.io\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/api/v1/namespaces/openshift-cluster-storage-operator/serviceaccounts/csi-snapshot-webhook\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/api/v1/namespaces/openshift-cluster-storage-operator/services/csi-snapshot-webhook\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/apis/policy/v1/namespaces/openshift-cluster-storage-operator/poddisruptionbudgets/csi-snapshot-webhook-pdb\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/apis/apps/v1/namespaces/openshift-cluster-storage-operator/deployments/csi-snapshot-webhook\": dial tcp 172.30.0.1:443: connect: connection refused]" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | openshift-kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 3"),Available changed from False to True ("StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3") |
| (x5) | openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master-0 |
Unhealthy |
Startup probe failed: Get "https://localhost:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) |
| (x3) | openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d64c13fe7663a0b4ae61d103b1b7598adcf317a01826f296bcb66b1a2de83c96" already present on machine |
| (x5) | openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master-0 |
ProbeError |
Startup probe error: Get "https://localhost:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) body: |
| (x2) | openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master-0 |
Killing |
Container cluster-policy-controller failed startup probe, will be restarted |
| (x3) | openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master-0 |
Created |
Created container: cluster-policy-controller |
| (x3) | openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master-0 |
Started |
Started container cluster-policy-controller |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: route \"openshift-authentication/oauth-openshift\": status does not have a valid host address\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.202.245:443/healthz\": dial tcp 172.30.202.245:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: no ingress for host oauth-openshift.apps.sno.openstack.lab in route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: route \"openshift-authentication/oauth-openshift\": status does not have a valid host address\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.202.245:443/healthz\": dial tcp 172.30.202.245:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" | |
| (x2) | openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master-0 |
Unhealthy |
Startup probe failed: Get "https://localhost:10357/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) |
| (x2) | openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master-0 |
ProbeError |
Startup probe error: Get "https://localhost:10357/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) body: |
openshift-cluster-storage-operator |
csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller |
csi-snapshot-controller-operator |
OperatorStatusChanged |
Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "WebhookRemovalControllerDegraded: [Delete \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/csi-snapshot-webhook-clusterrole\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/csi-snapshot-webhook-clusterrolebinding\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/apis/admissionregistration.k8s.io/v1/validatingwebhookconfigurations/snapshot.storage.k8s.io\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/api/v1/namespaces/openshift-cluster-storage-operator/serviceaccounts/csi-snapshot-webhook\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/api/v1/namespaces/openshift-cluster-storage-operator/services/csi-snapshot-webhook\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/apis/policy/v1/namespaces/openshift-cluster-storage-operator/poddisruptionbudgets/csi-snapshot-webhook-pdb\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/apis/apps/v1/namespaces/openshift-cluster-storage-operator/deployments/csi-snapshot-webhook\": dial tcp 172.30.0.1:443: connect: connection refused]" to "All is well" | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager |
openshift-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" to "Progressing: deployment/controller-manager: observed generation is 5, desired generation is 6.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: observed generation is 5, desired generation is 6.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: openshiftcontrollermanagers.operator.openshift.io/cluster: observed generation is 2, desired generation is 3." | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager |
openshift-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 5, desired generation is 6.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: observed generation is 5, desired generation is 6.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: openshiftcontrollermanagers.operator.openshift.io/cluster: observed generation is 2, desired generation is 3." to "Progressing: deployment/controller-manager: observed generation is 5, desired generation is 6.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: observed generation is 5, desired generation is 6.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" | |
openshift-route-controller-manager |
route-controller-manager |
openshift-route-controllers |
LeaderElection |
route-controller-manager-95cb5f987-46bsk_397195f7-0562-4c91-89d0-2f6233218cb1 became leader | |
| (x5) | openshift-operator-lifecycle-manager |
operator-lifecycle-manager |
packageserver |
NeedsReinstall |
apiServices not installed |
| (x6) | openshift-operator-lifecycle-manager |
operator-lifecycle-manager |
packageserver |
InstallWaiting |
apiServices not installed |
openshift-machine-api |
cluster-autoscaler-operator-5f49d774cd-894dk_c3fbfc90-01f9-4f6a-987c-f23c118ead3b |
cluster-autoscaler-operator-leader |
LeaderElection |
cluster-autoscaler-operator-5f49d774cd-894dk_c3fbfc90-01f9-4f6a-987c-f23c118ead3b became leader | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-status-controller-statussyncer_etcd |
etcd-operator |
OperatorStatusChanged |
Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" to "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" | |
openshift-etcd-operator |
openshift-cluster-etcd-operator |
etcd-operator |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} | |
openshift-etcd-operator |
openshift-cluster-etcd-operator |
openshift-cluster-etcd-operator-lock |
LeaderElection |
etcd-operator-5bf4d88c6f-2bpmr_f7924183-51fb-4773-b618-3e2fa52020b8 became leader | |
| (x2) | openshift-etcd-operator |
openshift-cluster-etcd-operator-member-observer-controller-etcdmemberscontroller |
etcd-operator |
ReportEtcdMembersErrorUpdatingStatus |
etcds.operator.openshift.io "cluster" not found |
openshift-etcd-operator |
openshift-cluster-etcd-operator-status-controller-statussyncer_etcd |
etcd-operator |
OperatorStatusChanged |
Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" to "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-status-controller-statussyncer_etcd |
etcd-operator |
OperatorStatusChanged |
Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" to "ScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-status-controller-statussyncer_etcd |
etcd-operator |
OperatorStatusChanged |
Status for clusteroperator/etcd changed: Degraded message changed from "ScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" to "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-status-controller-statussyncer_etcd |
etcd-operator |
OperatorStatusChanged |
Status for clusteroperator/etcd changed: Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 2\nEtcdMembersProgressing: No unstarted etcd members found"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 1; 0 nodes have achieved new revision 2\nEtcdMembersAvailable: 1 members are available" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 2\nEtcdMembersAvailable: 1 members are available" | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-installer-controller |
etcd-operator |
NodeCurrentRevisionChanged |
Updated node "master-0" from revision 1 to 2 because static pod is ready | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ConfigMapUpdated |
Updated ConfigMap/metrics-client-ca -n openshift-monitoring: cause by changes in data.client-ca.crt | |
openshift-network-operator |
network-operator |
network-operator-lock |
LeaderElection |
master-0_52c2a6b8-d268-4723-953e-dddc8e60871b became leader | |
openshift-network-operator |
cluster-network-operator |
network-operator |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} | |
openshift-service-ca |
service-ca-controller |
service-ca-controller-lock |
LeaderElection |
service-ca-77c99c46b8-6cntk_ecc45190-8feb-4b83-b5b2-b9a8b098f236 became leader | |
openshift-image-registry |
image-registry-operator |
cluster-image-registry-operator |
DaemonSetCreated |
Created DaemonSet.apps/node-ca -n openshift-image-registry because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-config-observer-configobserver |
openshift-controller-manager-operator |
ObservedConfigChanged |
Writing updated observed config:   map[string]any{   "build": map[string]any{"buildDefaults": map[string]any{"resources": map[string]any{}}, "imageTemplateFormat": map[string]any{"format": string("quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:31aa3c7464"...)}},   "controllers": []any{   ... // 8 identical elements   string("openshift.io/deploymentconfig"),   string("openshift.io/image-import"),   strings.Join({ - "-",   "openshift.io/image-puller-rolebindings",   }, ""),   string("openshift.io/image-signature-import"),   string("openshift.io/image-trigger"),   ... // 2 identical elements   string("openshift.io/origin-namespace"),   string("openshift.io/serviceaccount"),   strings.Join({ - "-",   "openshift.io/serviceaccount-pull-secrets",   }, ""),   string("openshift.io/templateinstance"),   string("openshift.io/templateinstancefinalizer"),   string("openshift.io/unidling"),   },   "deployer": map[string]any{"imageTemplateFormat": map[string]any{"format": string("quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:42c3f5030d"...)}},   "featureGates": []any{string("BuildCSIVolumes=true")},   "ingress": map[string]any{"ingressIPNetworkCIDR": string("")},   } | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager |
openshift-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 5, desired generation is 6.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: observed generation is 5, desired generation is 6.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" to "Progressing: deployment/controller-manager: observed generation is 5, desired generation is 7.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: observed generation is 5, desired generation is 7.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: openshiftcontrollermanagers.operator.openshift.io/cluster: observed generation is 3, desired generation is 4." | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager |
openshift-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 5, desired generation is 7.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: observed generation is 5, desired generation is 7.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: openshiftcontrollermanagers.operator.openshift.io/cluster: observed generation is 3, desired generation is 4." to "Progressing: deployment/controller-manager: observed generation is 5, desired generation is 7.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: observed generation is 5, desired generation is 7.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Degraded changed from False to True ("SATokenSignerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods)\nStaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-controller-manager-master-0)") | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator |
kube-controller-manager-operator-lock |
LeaderElection |
kube-controller-manager-operator-848f645654-7hmhg_9674dd2f-a3ae-47b8-a185-067c7a10704f became leader | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator |
kube-controller-manager-operator |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "SATokenSignerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods)\nStaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-controller-manager-master-0)" to "SATokenSignerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods)\nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"cluster-policy-controller\" is waiting: CrashLoopBackOff: back-off 1m20s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(e6b437c60bb18680f4492b00b294e872)" | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-installer-controller |
kube-controller-manager-operator |
InstallerPodFailed |
installer errors: installer: icy-controller-config", (string) (len=29) "controller-manager-kubeconfig", (string) (len=38) "kube-controller-cert-syncer-kubeconfig", (string) (len=17) "serviceaccount-ca", (string) (len=10) "service-ca", (string) (len=15) "recycler-config" }, OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) { (string) (len=12) "cloud-config" }, CertSecretNames: ([]string) (len=2 cap=2) { (string) (len=39) "kube-controller-manager-client-cert-key", (string) (len=10) "csr-signer" }, OptionalCertSecretNamePrefixes: ([]string) <nil>, CertConfigMapNamePrefixes: ([]string) (len=2 cap=2) { (string) (len=20) "aggregator-client-ca", (string) (len=9) "client-ca" }, OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) { (string) (len=17) "trusted-ca-bundle" }, CertDir: (string) (len=66) "/etc/kubernetes/static-pod-resources/kube-controller-manager-certs", ResourceDir: (string) (len=36) "/etc/kubernetes/static-pod-resources", PodManifestDir: (string) (len=25) "/etc/kubernetes/manifests", Timeout: (time.Duration) 2m0s, StaticPodManifestsLockFile: (string) "", PodMutationFns: ([]installerpod.PodMutationFunc) <nil>, KubeletVersion: (string) "" }) I1204 11:50:26.401782 1 cmd.go:413] Getting controller reference for node master-0 I1204 11:50:26.499253 1 cmd.go:426] Waiting for installer revisions to settle for node master-0 I1204 11:50:26.499327 1 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false I1204 11:50:26.499338 1 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false I1204 11:50:26.501970 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0 I1204 11:50:56.502405 1 cmd.go:524] Getting installer pods for node master-0 F1204 11:51:10.506236 1 cmd.go:109] Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": net/http: request canceled (Client.Timeout exceeded while awaiting headers) | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-satokensignercontroller |
kube-controller-manager-operator |
SATokenSignerControllerOK |
found expected kube-apiserver endpoints | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "SATokenSignerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods)\nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"cluster-policy-controller\" is waiting: CrashLoopBackOff: back-off 1m20s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(e6b437c60bb18680f4492b00b294e872)" to "NodeInstallerDegraded: 1 nodes are failing on revision 3:\nNodeInstallerDegraded: installer: icy-controller-config\",\nNodeInstallerDegraded: (string) (len=29) \"controller-manager-kubeconfig\",\nNodeInstallerDegraded: (string) (len=38) \"kube-controller-cert-syncer-kubeconfig\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=10) \"service-ca\",\nNodeInstallerDegraded: (string) (len=15) \"recycler-config\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"cloud-config\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=39) \"kube-controller-manager-client-cert-key\",\nNodeInstallerDegraded: (string) (len=10) \"csr-signer\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=66) \"/etc/kubernetes/static-pod-resources/kube-controller-manager-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I1204 11:50:26.401782 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I1204 11:50:26.499253 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I1204 11:50:26.499327 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I1204 11:50:26.499338 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I1204 11:50:26.501970 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I1204 11:50:56.502405 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F1204 11:51:10.506236 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nSATokenSignerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods)\nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"cluster-policy-controller\" is waiting: CrashLoopBackOff: back-off 1m20s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(e6b437c60bb18680f4492b00b294e872)" | |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-satokensignercontroller | kube-controller-manager-operator | ConfigMapUpdated | Updated ConfigMap/sa-token-signing-certs -n openshift-config-managed: cause by changes in data.service-account-002.pub |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-satokensignercontroller | kube-controller-manager-operator | SecretCreated | Created Secret/next-service-account-private-key -n openshift-kube-controller-manager-operator because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | openshift-kube-apiserver-operator | ConfigMapUpdated | Updated ConfigMap/sa-token-signing-certs -n openshift-kube-apiserver: cause by changes in data.service-account-002.pub |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | openshift-kube-apiserver-operator | StartingNewRevision | new revision 4 triggered by "required configmap/sa-token-signing-certs has changed" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeInstallerDegraded: 1 nodes are failing on revision 3:\nNodeInstallerDegraded: installer: icy-controller-config\",\nNodeInstallerDegraded: (string) (len=29) \"controller-manager-kubeconfig\",\nNodeInstallerDegraded: (string) (len=38) \"kube-controller-cert-syncer-kubeconfig\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=10) \"service-ca\",\nNodeInstallerDegraded: (string) (len=15) \"recycler-config\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"cloud-config\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=39) \"kube-controller-manager-client-cert-key\",\nNodeInstallerDegraded: (string) (len=10) \"csr-signer\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=66) \"/etc/kubernetes/static-pod-resources/kube-controller-manager-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I1204 11:50:26.401782 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I1204 11:50:26.499253 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I1204 11:50:26.499327 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I1204 11:50:26.499338 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I1204 11:50:26.501970 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I1204 11:50:56.502405 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F1204 11:51:10.506236 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nSATokenSignerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods)\nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"cluster-policy-controller\" is waiting: CrashLoopBackOff: back-off 1m20s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(e6b437c60bb18680f4492b00b294e872)" to "NodeInstallerDegraded: 1 nodes are failing on revision 3:\nNodeInstallerDegraded: installer: icy-controller-config\",\nNodeInstallerDegraded: (string) (len=29) \"controller-manager-kubeconfig\",\nNodeInstallerDegraded: (string) (len=38) \"kube-controller-cert-syncer-kubeconfig\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=10) \"service-ca\",\nNodeInstallerDegraded: (string) (len=15) \"recycler-config\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"cloud-config\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=39) \"kube-controller-manager-client-cert-key\",\nNodeInstallerDegraded: (string) (len=10) \"csr-signer\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=66) \"/etc/kubernetes/static-pod-resources/kube-controller-manager-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I1204 11:50:26.401782 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I1204 11:50:26.499253 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I1204 11:50:26.499327 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I1204 11:50:26.499338 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I1204 11:50:26.501970 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I1204 11:50:56.502405 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F1204 11:51:10.506236 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"cluster-policy-controller\" is waiting: CrashLoopBackOff: back-off 1m20s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(e6b437c60bb18680f4492b00b294e872)" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | openshift-kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-pod-4 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | openshift-kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/config-4 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | openshift-kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-4 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | openshift-kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/bound-sa-token-signing-certs-4 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | openshift-kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/etcd-serving-ca-4 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | openshift-kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-server-ca-4 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | openshift-kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kubelet-serving-ca-4 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | openshift-kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/sa-token-signing-certs-4 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | openshift-kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-audit-policies-4 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | openshift-kube-apiserver-operator | SecretCreated | Created Secret/etcd-client-4 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-controller-manager | multus | installer-3-retry-1-master-0 | AddedInterface | Add eth0 [10.128.0.81/23] from ovn-kubernetes |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | openshift-kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-serving-certkey-4 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | PodCreated | Created Pod/installer-3-retry-1-master-0 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager | kubelet | installer-3-retry-1-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a824e468cf8dd61d347e35b2ee5bc2f815666957647098e21a1bb56ff613e5b9" already present on machine |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | openshift-kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-client-token-4 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-controller-manager | kubelet | installer-3-retry-1-master-0 | Started | Started container installer |
| | openshift-kube-controller-manager | kubelet | installer-3-retry-1-master-0 | Created | Created container: installer |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | openshift-kube-apiserver-operator | SecretCreated | Created Secret/webhook-authenticator-4 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | openshift-kube-apiserver-operator | RevisionTriggered | new revision 4 triggered by "required configmap/sa-token-signing-certs has changed" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | openshift-kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 3; 0 nodes have achieved new revision 4"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3; 0 nodes have achieved new revision 4" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | openshift-kube-apiserver-operator | NodeTargetRevisionChanged | Updating node "master-0" from revision 3 to 4 because node master-0 with revision 3 is the oldest |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | openshift-kube-apiserver-operator | PodCreated | Created Pod/installer-4-master-0 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver | kubelet | installer-4-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f43c31aa3359159d4557dad3cfaf812d8ce44db9cb9ae970e06d3479070b660" already present on machine |
| | openshift-kube-apiserver | kubelet | installer-4-master-0 | Created | Created container: installer |
| | openshift-kube-apiserver | kubelet | installer-4-master-0 | Started | Started container installer |
| | openshift-kube-apiserver | multus | installer-4-master-0 | AddedInterface | Add eth0 [10.128.0.82/23] from ovn-kubernetes |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded changed from True to False ("NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 3:\nNodeInstallerDegraded: installer: icy-controller-config\",\nNodeInstallerDegraded: (string) (len=29) \"controller-manager-kubeconfig\",\nNodeInstallerDegraded: (string) (len=38) \"kube-controller-cert-syncer-kubeconfig\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=10) \"service-ca\",\nNodeInstallerDegraded: (string) (len=15) \"recycler-config\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"cloud-config\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=39) \"kube-controller-manager-client-cert-key\",\nNodeInstallerDegraded: (string) (len=10) \"csr-signer\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=66) \"/etc/kubernetes/static-pod-resources/kube-controller-manager-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I1204 11:50:26.401782 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I1204 11:50:26.499253 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I1204 11:50:26.499327 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I1204 11:50:26.499338 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I1204 11:50:26.501970 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I1204 11:50:56.502405 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F1204 11:51:10.506236 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: ") |
| | openshift-kube-controller-manager | static-pod-installer | installer-3-retry-1-master-0 | StaticPodInstallerCompleted | Successfully installed revision 3 |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Started | Started container cluster-policy-controller |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a824e468cf8dd61d347e35b2ee5bc2f815666957647098e21a1bb56ff613e5b9" already present on machine |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Started | Started container kube-controller-manager |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Created | Created container: kube-controller-manager |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6d5891cdd7dcf7c9081de8b364b4c96446b7f946f7880fbae291a4592a198264" already present on machine |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d64c13fe7663a0b4ae61d103b1b7598adcf317a01826f296bcb66b1a2de83c96" already present on machine |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Created | Created container: cluster-policy-controller |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Created | Created container: kube-controller-manager-cert-syncer |
| | openshift-kube-controller-manager | cluster-policy-controller | cluster-policy-controller-lock | LeaderElection | master-0_3537ea3b-bb87-4a34-b047-72d8959cf913 became leader |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Started | Started container kube-controller-manager-recovery-controller |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Created | Created container: kube-controller-manager-recovery-controller |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a824e468cf8dd61d347e35b2ee5bc2f815666957647098e21a1bb56ff613e5b9" already present on machine |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Started | Started container kube-controller-manager-cert-syncer |
| | openshift-kube-controller-manager | cluster-policy-controller | kube-controller-manager-master-0 | ControlPlaneTopology | unable to get control plane topology, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | ProbeError | Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]api-openshift-apiserver-available ok [+]api-openshift-oauth-apiserver-available ok [+]informer-sync ok [+]poststarthook/start-apiserver-admission-initializer ok [+]poststarthook/quota.openshift.io-clusterquotamapping ok [+]poststarthook/openshift.io-api-request-count-filter ok [+]poststarthook/openshift.io-startkubeinformers ok [+]poststarthook/openshift.io-openshift-apiserver-reachable ok [+]poststarthook/openshift.io-oauth-apiserver-reachable ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/priority-and-fairness-config-consumer ok [+]poststarthook/priority-and-fairness-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/start-apiextensions-informers ok [+]poststarthook/start-apiextensions-controllers ok [+]poststarthook/crd-informer-synced ok [+]poststarthook/start-system-namespaces-controller ok [+]poststarthook/start-cluster-authentication-info-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok [+]poststarthook/start-legacy-token-tracking-controller ok [+]poststarthook/start-service-ip-repair-controllers ok [+]poststarthook/rbac/bootstrap-roles ok [+]poststarthook/scheduling/bootstrap-system-priority-classes ok [+]poststarthook/priority-and-fairness-config-producer ok [+]poststarthook/bootstrap-controller ok [+]poststarthook/aggregator-reload-proxy-client-cert ok [+]poststarthook/start-kube-aggregator-informers ok [+]poststarthook/apiservice-status-local-available-controller ok [+]poststarthook/apiservice-status-remote-available-controller ok [+]poststarthook/apiservice-registration-controller ok [+]poststarthook/apiservice-wait-for-first-sync ok [+]poststarthook/apiservice-discovery-controller ok [+]poststarthook/kube-apiserver-autoregistration ok [+]autoregister-completion ok [+]poststarthook/apiservice-openapi-controller ok [+]poststarthook/apiservice-openapiv3-controller ok [-]shutdown failed: reason withheld readyz check failed |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Killing | Stopping container kube-apiserver-cert-syncer |
| | openshift-kube-apiserver | kubelet | kube-apiserver-startup-monitor-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f43c31aa3359159d4557dad3cfaf812d8ce44db9cb9ae970e06d3479070b660" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Unhealthy | Readiness probe failed: HTTP probe failed with statuscode: 500 |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | AfterShutdownDelayDuration | The minimal shutdown duration of 0s finished |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | ShutdownInitiated | Received signal to terminate, becoming unready, but keeping serving |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | TerminationPreShutdownHooksFinished | All pre-shutdown hooks have been finished |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Killing | Stopping container kube-apiserver |
| | openshift-kube-apiserver | static-pod-installer | installer-4-master-0 | StaticPodInstallerCompleted | Successfully installed revision 4 |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Killing | Stopping container kube-apiserver-check-endpoints |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Killing | Stopping container kube-apiserver-cert-regeneration-controller |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | InFlightRequestsDrained | All non long-running request(s) in-flight have drained |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Killing | Stopping container kube-apiserver-insecure-readyz |
| | openshift-kube-apiserver | kubelet | kube-apiserver-startup-monitor-master-0 | Started | Started container startup-monitor |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | HTTPServerStoppedListening | HTTP Server has stopped listening |
| | openshift-kube-apiserver | kubelet | kube-apiserver-startup-monitor-master-0 | Created | Created container: startup-monitor |
| (x2) | openshift-route-controller-manager | kubelet | route-controller-manager-95cb5f987-46bsk | BackOff | Back-off restarting failed container route-controller-manager in pod route-controller-manager-95cb5f987-46bsk_openshift-route-controller-manager(e4d7939a-5961-4608-b910-73e71aa55bf6) |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | TerminationGracefulTerminationFinished | All pending requests processed |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6d5891cdd7dcf7c9081de8b364b4c96446b7f946f7880fbae291a4592a198264" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: setup |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container setup |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f43c31aa3359159d4557dad3cfaf812d8ce44db9cb9ae970e06d3479070b660" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6d5891cdd7dcf7c9081de8b364b4c96446b7f946f7880fbae291a4592a198264" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver-cert-syncer |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f43c31aa3359159d4557dad3cfaf812d8ce44db9cb9ae970e06d3479070b660" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver-insecure-readyz |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver-insecure-readyz |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver-cert-syncer |
| (x2) | openshift-route-controller-manager | kubelet | route-controller-manager-95cb5f987-46bsk | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c416b201d480bddb5a4960ec42f4740761a1335001cf84ba5ae19ad6857771b1" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver-cert-regeneration-controller |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f43c31aa3359159d4557dad3cfaf812d8ce44db9cb9ae970e06d3479070b660" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver-cert-regeneration-controller |
| (x2) | openshift-route-controller-manager | kubelet | route-controller-manager-95cb5f987-46bsk | Created | Created container: route-controller-manager |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f43c31aa3359159d4557dad3cfaf812d8ce44db9cb9ae970e06d3479070b660" already present on machine |
| (x2) | openshift-route-controller-manager | kubelet | route-controller-manager-95cb5f987-46bsk | Started | Started container route-controller-manager |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver-check-endpoints |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver-check-endpoints |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | KubeAPIReadyz | readyz=true |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | ProbeError | Startup probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd ok [+]poststarthook/openshift.io-startkubeinformers ok [+]poststarthook/openshift.io-openshift-apiserver-reachable ok [+]poststarthook/openshift.io-oauth-apiserver-reachable ok [+]poststarthook/start-apiserver-admission-initializer ok [+]poststarthook/quota.openshift.io-clusterquotamapping ok [+]poststarthook/openshift.io-api-request-count-filter ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/priority-and-fairness-config-consumer ok [+]poststarthook/priority-and-fairness-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/start-apiextensions-informers ok [+]poststarthook/start-apiextensions-controllers ok [+]poststarthook/crd-informer-synced ok [+]poststarthook/start-system-namespaces-controller ok [+]poststarthook/start-cluster-authentication-info-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok [+]poststarthook/start-legacy-token-tracking-controller ok [+]poststarthook/start-service-ip-repair-controllers ok [-]poststarthook/rbac/bootstrap-roles failed: reason withheld [+]poststarthook/scheduling/bootstrap-system-priority-classes ok [+]poststarthook/priority-and-fairness-config-producer ok [+]poststarthook/bootstrap-controller ok [+]poststarthook/aggregator-reload-proxy-client-cert ok [+]poststarthook/start-kube-aggregator-informers ok [+]poststarthook/apiservice-status-local-available-controller ok [+]poststarthook/apiservice-status-remote-available-controller ok [+]poststarthook/apiservice-registration-controller ok [+]poststarthook/apiservice-wait-for-first-sync ok [+]poststarthook/apiservice-discovery-controller ok [+]poststarthook/kube-apiserver-autoregistration ok [+]autoregister-completion ok [+]poststarthook/apiservice-openapi-controller ok [+]poststarthook/apiservice-openapiv3-controller ok livez check failed |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Unhealthy | Startup probe failed: HTTP probe failed with statuscode: 500 |
| | openshift-route-controller-manager | kubelet | route-controller-manager-95cb5f987-46bsk | Unhealthy | Readiness probe failed: Get "https://10.128.0.70:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) |
| | openshift-route-controller-manager | kubelet | route-controller-manager-95cb5f987-46bsk | ProbeError | Readiness probe error: Get "https://10.128.0.70:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) body: |
| | openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-ingress-canary namespace |
| | openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-console-user-settings namespace |
| | openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-console namespace |
| | openshift-kube-apiserver | cert-regeneration-controller | cert-regeneration-controller-lock | LeaderElection | master-0_9d846e24-1cea-46cb-ba8a-91e3625fcc31 became leader |
| | openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-console-operator namespace |
| | openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-network-console namespace |
| | kube-system | kube-controller-manager | kube-controller-manager | LeaderElection | master-0_2f943774-ec8d-4f5c-97d6-9fc9fcdc9dca became leader |
| | default | node-controller | master-0 | RegisteredNode | Node master-0 event: Registered Node master-0 in Controller |
| | openshift-cloud-controller-manager-operator | deployment-controller | cluster-cloud-controller-manager-operator | ScalingReplicaSet | Scaled down replica set cluster-cloud-controller-manager-operator-74f484689c to 0 from 1 |
| | openshift-image-registry | daemonset-controller | node-ca | SuccessfulCreate | Created pod: node-ca-g49xm |
| | openshift-console-operator | deployment-controller | console-operator | ScalingReplicaSet | Scaled up replica set console-operator-54dbc87ccb to 1 |
| | openshift-cluster-machine-approver | deployment-controller | machine-approver | ScalingReplicaSet | Scaled down replica set machine-approver-f797d8546 to 0 from 1 |
| | openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SuccessfulCreate | Created job collect-profiles-29414160 |
| | openshift-network-console | deployment-controller | networking-console-plugin | ScalingReplicaSet | Scaled up replica set networking-console-plugin-7d45bf9455 to 1 |
| | openshift-operator-lifecycle-manager | job-controller | collect-profiles-29414160 | SuccessfulCreate | Created pod: collect-profiles-29414160-dmjlv |
| | openshift-machine-config-operator | daemonset-controller | machine-config-daemon | SuccessfulCreate | Created pod: machine-config-daemon-8jwk5 |
| | openshift-multus | deployment-controller | multus-admission-controller | ScalingReplicaSet | Scaled up replica set multus-admission-controller-77d4cb9fc to 1 |
| (x5) | openshift-etcd-operator | openshift-cluster-etcd-operator-script-controller-scriptcontroller | etcd-operator | ScriptControllerErrorUpdatingStatus | Put "https://172.30.0.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 172.30.0.1:443: connect: connection refused |
openshift-network-console |
replicaset-controller |
networking-console-plugin-7d45bf9455 |
SuccessfulCreate |
Created pod: networking-console-plugin-7d45bf9455-w67z7 | |
openshift-cloud-controller-manager-operator |
replicaset-controller |
cluster-cloud-controller-manager-operator-74f484689c |
SuccessfulDelete |
Deleted pod: cluster-cloud-controller-manager-operator-74f484689c-jmfn2 | |
openshift-cluster-machine-approver |
replicaset-controller |
machine-approver-f797d8546 |
SuccessfulDelete |
Deleted pod: machine-approver-f797d8546-qvgbq | |
openshift-console-operator |
replicaset-controller |
console-operator-54dbc87ccb |
SuccessfulCreate |
Created pod: console-operator-54dbc87ccb-n8qgg | |
openshift-route-controller-manager |
deployment-controller |
route-controller-manager |
ScalingReplicaSet |
Scaled down replica set route-controller-manager-95cb5f987 to 0 from 1 | |
| (x5) | openshift-etcd-operator |
openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller |
etcd-operator |
EtcdEndpointsErrorUpdatingStatus |
Put "https://172.30.0.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 172.30.0.1:443: connect: connection refused |
openshift-multus |
replicaset-controller |
multus-admission-controller-77d4cb9fc |
SuccessfulCreate |
Created pod: multus-admission-controller-77d4cb9fc-5x5q5 | |
openshift-route-controller-manager |
replicaset-controller |
route-controller-manager-95cb5f987 |
SuccessfulDelete |
Deleted pod: route-controller-manager-95cb5f987-46bsk | |
openshift-controller-manager |
deployment-controller |
controller-manager |
ScalingReplicaSet |
Scaled up replica set controller-manager-b644f86d6 to 1 from 0 | |
openshift-route-controller-manager |
replicaset-controller |
route-controller-manager-557cff67c |
SuccessfulCreate |
Created pod: route-controller-manager-557cff67c-7qs6t | |
openshift-controller-manager |
replicaset-controller |
controller-manager-6686654b8d |
SuccessfulDelete |
Deleted pod: controller-manager-6686654b8d-rrndk | |
openshift-controller-manager |
replicaset-controller |
controller-manager-b644f86d6 |
SuccessfulCreate |
Created pod: controller-manager-b644f86d6-mvlh8 | |
openshift-route-controller-manager |
deployment-controller |
route-controller-manager |
ScalingReplicaSet |
Scaled up replica set route-controller-manager-557cff67c to 1 from 0 | |
openshift-controller-manager |
deployment-controller |
controller-manager |
ScalingReplicaSet |
Scaled down replica set controller-manager-6686654b8d to 0 from 1 | |
| (x4) | openshift-route-controller-manager |
kubelet |
route-controller-manager-95cb5f987-46bsk |
Unhealthy |
Readiness probe failed: Get "https://10.128.0.70:8443/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) |
| (x4) | openshift-route-controller-manager |
kubelet |
route-controller-manager-95cb5f987-46bsk |
ProbeError |
Readiness probe error: Get "https://10.128.0.70:8443/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) body: |
openshift-route-controller-manager |
route-controller-manager |
openshift-route-controllers |
LeaderElection |
route-controller-manager-95cb5f987-46bsk_1436ade7-22bb-4a6c-8b0d-0cea146090a8 became leader | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Available message changed from "OAuthServerRouteEndpointAccessibleControllerAvailable: failed to retrieve route from cache: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.202.245:443/healthz\": dial tcp 172.30.202.245:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods).\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "OAuthServerRouteEndpointAccessibleControllerAvailable: failed to retrieve route from cache: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.202.245:443/healthz\": dial tcp 172.30.202.245:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. 
Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods).\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"99996238-c997-4cd9-aef9-50e5ee960960\", ResourceVersion:\"14621\", Generation:0, CreationTimestamp:time.Date(2025, time.December, 4, 11, 30, 1, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.December, 4, 11, 59, 22, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc001e912f0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" | |
openshift-cluster-machine-approver |
kubelet |
machine-approver-f797d8546-qvgbq |
Killing |
Stopping container machine-approver-controller | |
openshift-machine-config-operator |
kubelet |
machine-config-daemon-8jwk5 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69ffd8f8dcceedc2d6eb306cea33f8beabc1be1308cd5f4ee8b9a8e3eab9843" already present on machine | |
openshift-image-registry |
kubelet |
node-ca-g49xm |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3ebe19b23694155a15d0968968fdee3dcf200ab9718ae1fcbd05f4d24960b827" | |
openshift-multus |
multus |
multus-admission-controller-77d4cb9fc-5x5q5 |
AddedInterface |
Add eth0 [10.128.0.84/23] from ovn-kubernetes | |
openshift-cloud-controller-manager-operator |
kubelet |
cluster-cloud-controller-manager-operator-74f484689c-jmfn2 |
Killing |
Stopping container kube-rbac-proxy | |
openshift-controller-manager |
kubelet |
controller-manager-6686654b8d-rrndk |
Killing |
Stopping container controller-manager | |
openshift-route-controller-manager |
kubelet |
route-controller-manager-95cb5f987-46bsk |
Killing |
Stopping container route-controller-manager | |
openshift-cloud-controller-manager-operator |
kubelet |
cluster-cloud-controller-manager-operator-74f484689c-jmfn2 |
Killing |
Stopping container config-sync-controllers | |
openshift-cloud-controller-manager-operator |
kubelet |
cluster-cloud-controller-manager-operator-74f484689c-jmfn2 |
Killing |
Stopping container cluster-cloud-controller-manager | |
openshift-cluster-machine-approver |
kubelet |
machine-approver-f797d8546-qvgbq |
Killing |
Stopping container kube-rbac-proxy | |
openshift-cloud-controller-manager-operator |
replicaset-controller |
cluster-cloud-controller-manager-operator-758cf9d97b |
SuccessfulCreate |
Created pod: cluster-cloud-controller-manager-operator-758cf9d97b-gcfhw | |
openshift-console-operator |
multus |
console-operator-54dbc87ccb-n8qgg |
AddedInterface |
Add eth0 [10.128.0.83/23] from ovn-kubernetes | |
openshift-machine-config-operator |
kubelet |
machine-config-daemon-8jwk5 |
Created |
Created container: kube-rbac-proxy | |
openshift-machine-config-operator |
kubelet |
machine-config-daemon-8jwk5 |
Started |
Started container kube-rbac-proxy | |
openshift-multus |
kubelet |
multus-admission-controller-77d4cb9fc-5x5q5 |
Started |
Started container kube-rbac-proxy | |
openshift-multus |
kubelet |
multus-admission-controller-77d4cb9fc-5x5q5 |
Created |
Created container: kube-rbac-proxy | |
openshift-multus |
kubelet |
multus-admission-controller-77d4cb9fc-5x5q5 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69ffd8f8dcceedc2d6eb306cea33f8beabc1be1308cd5f4ee8b9a8e3eab9843" already present on machine | |
openshift-multus |
kubelet |
multus-admission-controller-77d4cb9fc-5x5q5 |
Started |
Started container multus-admission-controller | |
openshift-console-operator |
kubelet |
console-operator-54dbc87ccb-n8qgg |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0c3d16a01c2d60f9b536ca815ed8dc6abdca2b78e392551dc3fb79be537a354" | |
openshift-multus |
kubelet |
multus-admission-controller-77d4cb9fc-5x5q5 |
Created |
Created container: multus-admission-controller | |
openshift-multus |
kubelet |
multus-admission-controller-77d4cb9fc-5x5q5 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4ecc5bac651ff1942865baee5159582e9602c89b47eeab18400a32abcba8f690" already present on machine | |
openshift-cloud-controller-manager-operator |
deployment-controller |
cluster-cloud-controller-manager-operator |
ScalingReplicaSet |
Scaled up replica set cluster-cloud-controller-manager-operator-758cf9d97b to 1 | |
openshift-machine-config-operator |
machine-config-operator |
master-0 |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", 
"StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} | |
openshift-multus |
replicaset-controller |
multus-admission-controller-7dfc5b745f |
SuccessfulDelete |
Deleted pod: multus-admission-controller-7dfc5b745f-258xq | |
openshift-cluster-machine-approver |
kubelet |
machine-approver-74d9cbffbc-9c59x |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69ffd8f8dcceedc2d6eb306cea33f8beabc1be1308cd5f4ee8b9a8e3eab9843" already present on machine | |
openshift-cluster-machine-approver |
kubelet |
machine-approver-74d9cbffbc-9c59x |
Started |
Started container kube-rbac-proxy | |
openshift-cluster-machine-approver |
kubelet |
machine-approver-74d9cbffbc-9c59x |
Created |
Created container: kube-rbac-proxy | |
openshift-cloud-controller-manager-operator |
kubelet |
cluster-cloud-controller-manager-operator-758cf9d97b-gcfhw |
Started |
Started container cluster-cloud-controller-manager | |
openshift-cluster-machine-approver |
replicaset-controller |
machine-approver-74d9cbffbc |
SuccessfulCreate |
Created pod: machine-approver-74d9cbffbc-9c59x | |
openshift-cluster-machine-approver |
deployment-controller |
machine-approver |
ScalingReplicaSet |
Scaled up replica set machine-approver-74d9cbffbc to 1 | |
openshift-cloud-controller-manager-operator |
kubelet |
cluster-cloud-controller-manager-operator-758cf9d97b-gcfhw |
Created |
Created container: cluster-cloud-controller-manager | |
openshift-image-registry |
kubelet |
node-ca-g49xm |
Created |
Created container: node-ca | |
openshift-image-registry |
kubelet |
node-ca-g49xm |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3ebe19b23694155a15d0968968fdee3dcf200ab9718ae1fcbd05f4d24960b827" in 2.123s (2.123s including waiting). Image size: 476100320 bytes. | |
openshift-multus |
deployment-controller |
multus-admission-controller |
ScalingReplicaSet |
Scaled down replica set multus-admission-controller-7dfc5b745f to 0 from 1 | |
openshift-multus |
kubelet |
multus-admission-controller-7dfc5b745f-258xq |
Killing |
Stopping container kube-rbac-proxy | |
openshift-cloud-controller-manager |
cloud-controller-manager-operator |
openshift-cloud-controller-manager |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", 
"StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} | |
openshift-cloud-controller-manager-operator |
kubelet |
cluster-cloud-controller-manager-operator-758cf9d97b-gcfhw |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd38b8be3af889b0f97e2df41517c89a11260901432a9a1ee943195bb3a22737" already present on machine | |
openshift-image-registry |
kubelet |
node-ca-g49xm |
Started |
Started container node-ca | |
openshift-multus |
kubelet |
multus-admission-controller-7dfc5b745f-258xq |
Killing |
Stopping container multus-admission-controller | |
openshift-cloud-controller-manager-operator |
kubelet |
cluster-cloud-controller-manager-operator-758cf9d97b-gcfhw |
Created |
Created container: config-sync-controllers | |
openshift-cluster-machine-approver |
kubelet |
machine-approver-74d9cbffbc-9c59x |
Created |
Created container: machine-approver-controller | |
openshift-cluster-machine-approver |
kubelet |
machine-approver-74d9cbffbc-9c59x |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8cc27777e72233024fe84ee1faa168aec715a0b24912a3ce70715ddccba328df" already present on machine | |
openshift-cloud-controller-manager-operator |
kubelet |
cluster-cloud-controller-manager-operator-758cf9d97b-gcfhw |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd38b8be3af889b0f97e2df41517c89a11260901432a9a1ee943195bb3a22737" already present on machine | |
openshift-cluster-machine-approver |
kubelet |
machine-approver-74d9cbffbc-9c59x |
Started |
Started container machine-approver-controller | |
openshift-console-operator |
kubelet |
console-operator-54dbc87ccb-n8qgg |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0c3d16a01c2d60f9b536ca815ed8dc6abdca2b78e392551dc3fb79be537a354" in 2.618s (2.619s including waiting). Image size: 506703191 bytes. | |
openshift-cloud-controller-manager-operator |
kubelet |
cluster-cloud-controller-manager-operator-758cf9d97b-gcfhw |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69ffd8f8dcceedc2d6eb306cea33f8beabc1be1308cd5f4ee8b9a8e3eab9843" already present on machine | |
openshift-cloud-controller-manager-operator |
kubelet |
cluster-cloud-controller-manager-operator-758cf9d97b-gcfhw |
Started |
Started container config-sync-controllers | |
openshift-console-operator |
kubelet |
console-operator-54dbc87ccb-n8qgg |
Created |
Created container: console-operator | |
openshift-console-operator |
kubelet |
console-operator-54dbc87ccb-n8qgg |
Started |
Started container console-operator | |
openshift-cloud-controller-manager-operator |
kubelet |
cluster-cloud-controller-manager-operator-758cf9d97b-gcfhw |
Created |
Created container: kube-rbac-proxy | |
openshift-console-operator |
console-operator |
console-operator |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", 
"StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} | |
openshift-cluster-machine-approver |
master-0_899dd820-437e-4e26-bd30-533efa64268d |
cluster-machine-approver-leader |
LeaderElection |
master-0_899dd820-437e-4e26-bd30-533efa64268d became leader | |
openshift-route-controller-manager |
multus |
route-controller-manager-557cff67c-7qs6t |
AddedInterface |
Add eth0 [10.128.0.85/23] from ovn-kubernetes | |
openshift-route-controller-manager |
kubelet |
route-controller-manager-557cff67c-7qs6t |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c416b201d480bddb5a4960ec42f4740761a1335001cf84ba5ae19ad6857771b1" already present on machine | |
openshift-cloud-controller-manager-operator |
kubelet |
cluster-cloud-controller-manager-operator-758cf9d97b-gcfhw |
Started |
Started container kube-rbac-proxy | |
openshift-console-operator |
console-operator |
console-operator-lock |
LeaderElection |
console-operator-54dbc87ccb-n8qgg_dc7d04f6-57b2-4bab-9e25-96ce41ece43e became leader | |
openshift-route-controller-manager |
kubelet |
route-controller-manager-557cff67c-7qs6t |
Started |
Started container route-controller-manager | |
| (x2) | openshift-console |
controllermanager |
downloads |
NoPods |
No matching pods found |
openshift-console-operator |
console-operator-status-controller-statussyncer_console |
console-operator |
OperatorStatusChanged |
Status for clusteroperator/console changed: Degraded message changed from "All is well" to "SyncLoopRefreshDegraded: route.route.openshift.io \"console\" not found" | |
openshift-console-operator |
console-operator-console-downloads-deployment-controller-consoledownloadsdeploymentsynccontroller |
console-operator |
DeploymentCreated |
Created Deployment.apps/downloads -n openshift-console because it was missing | |
openshift-console-operator |
console-operator-status-controller-statussyncer_console |
console-operator |
OperatorStatusChanged |
Status for clusteroperator/console changed: Degraded set to False ("All is well"),Progressing set to False ("All is well"),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"console.openshift.io" "consoleplugins" "" "networking-console-plugin"} {"operator.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "infrastructures" "" "cluster"} {"config.openshift.io" "proxies" "" "cluster"} {"config.openshift.io" "oauths" "" "cluster"} {"oauth.openshift.io" "oauthclients" "" "console"} {"" "namespaces" "" "openshift-console-operator"} {"" "namespaces" "" "openshift-console"} {"" "configmaps" "openshift-config-managed" "console-public"}],status.versions changed from [] to [{"operator" "4.18.29"}] | |
openshift-console-operator |
console-operator-downloads-pdb-controller-poddisruptionbudgetcontroller |
console-operator |
PodDisruptionBudgetCreated |
Created PodDisruptionBudget.policy/downloads -n openshift-console because it was missing | |
openshift-console-operator |
console-operator-console-pdb-controller-poddisruptionbudgetcontroller |
console-operator |
PodDisruptionBudgetCreated |
Created PodDisruptionBudget.policy/console -n openshift-console because it was missing | |
openshift-console-operator |
console-operator-status-controller-statussyncer_console |
console-operator |
OperatorVersionChanged |
clusteroperator/console version "operator" changed from "" to "4.18.29" | |
| (x2) | openshift-console |
controllermanager |
console |
NoPods |
No matching pods found |
openshift-console-operator |
console-operator-health-check-controller-healthcheckcontroller |
console-operator |
FastControllerResync |
Controller "HealthCheckController" resync interval is set to 30s which might lead to client request throttling | |
openshift-console-operator |
console-operator-status-controller-statussyncer_console |
console-operator |
OperatorStatusChanged |
Status for clusteroperator/console changed: Degraded message changed from "SyncLoopRefreshDegraded: route.route.openshift.io \"console\" not found" to "SyncLoopRefreshDegraded: route.route.openshift.io \"console\" not found\nOAuthClientsControllerDegraded: route.route.openshift.io \"console\" not found" | |
openshift-console |
replicaset-controller |
downloads-69cd4c69bf |
SuccessfulCreate |
Created pod: downloads-69cd4c69bf-wlssv | |
openshift-console |
deployment-controller |
downloads |
ScalingReplicaSet |
Scaled up replica set downloads-69cd4c69bf to 1 | |
openshift-route-controller-manager |
route-controller-manager |
openshift-route-controllers |
LeaderElection |
route-controller-manager-557cff67c-7qs6t_e7c82bb4-78b1-4c51-9b4b-81c392dd9dab became leader | |
openshift-route-controller-manager |
kubelet |
route-controller-manager-557cff67c-7qs6t |
Created |
Created container: route-controller-manager | |
openshift-controller-manager |
kubelet |
controller-manager-b644f86d6-mvlh8 |
Started |
Started container controller-manager | |
openshift-console |
kubelet |
downloads-69cd4c69bf-wlssv |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:50e368e01772dd0dc9c4f9a6cdd5a9693a224968f75dc19eafe2a416f583bdab" | |
openshift-controller-manager |
kubelet |
controller-manager-b644f86d6-mvlh8 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eddedae7578d79b5a3f748000ae5c00b9f14a04710f9f9ec7b52fc569be5dfb8" already present on machine | |
openshift-console-operator |
console-operator-resource-sync-controller-resourcesynccontroller |
console-operator |
ConfigMapCreated |
Created ConfigMap/default-ingress-cert -n openshift-console because it was missing | |
openshift-controller-manager |
kubelet |
controller-manager-b644f86d6-mvlh8 |
Created |
Created container: controller-manager | |
openshift-controller-manager |
openshift-controller-manager |
openshift-master-controllers |
LeaderElection |
controller-manager-b644f86d6-mvlh8 became leader | |
openshift-console-operator |
console-operator-console-service-controller-consoleservicecontroller |
console-operator |
ServiceCreated |
Created Service/downloads -n openshift-console because it was missing | |
openshift-controller-manager |
multus |
controller-manager-b644f86d6-mvlh8 |
AddedInterface |
Add eth0 [10.128.0.87/23] from ovn-kubernetes | |
openshift-console-operator |
console-operator-resource-sync-controller-resourcesynccontroller |
console-operator |
ConfigMapCreated |
Created ConfigMap/oauth-serving-cert -n openshift-console because it was missing | |
openshift-console-operator |
console-operator-oauthclient-secret-controller-oauthclientsecretcontroller |
console-operator |
SecretCreated |
Created Secret/console-oauth-config -n openshift-console because it was missing | |
openshift-console-operator |
console-operator-console-service-controller-consoleservicecontroller |
console-operator |
ServiceCreated |
Created Service/console -n openshift-console because it was missing | |
openshift-console |
multus |
downloads-69cd4c69bf-wlssv |
AddedInterface |
Add eth0 [10.128.0.86/23] from ovn-kubernetes | |
openshift-console-operator |
console-operator-status-controller-statussyncer_console |
console-operator |
OperatorStatusChanged |
Status for clusteroperator/console changed: Degraded message changed from "SyncLoopRefreshDegraded: route.route.openshift.io \"console\" not found\nOAuthClientsControllerDegraded: route.route.openshift.io \"console\" not found" to "SyncLoopRefreshDegraded: route.route.openshift.io \"console\" not found\nOAuthClientsControllerDegraded: no ingress for host console-openshift-console.apps.sno.openstack.lab in route console in namespace openshift-console" | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-startup-monitor-master-0 |
Killing |
Stopping container startup-monitor | |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded message changed from "SyncLoopRefreshDegraded: route.route.openshift.io \"console\" not found\nOAuthClientsControllerDegraded: no ingress for host console-openshift-console.apps.sno.openstack.lab in route console in namespace openshift-console" to "SyncLoopRefreshDegraded: route.route.openshift.io \"console\" not found\nOAuthClientsControllerDegraded: no ingress for host console-openshift-console.apps.sno.openstack.lab in route console in namespace openshift-console\nConsoleDefaultRouteSyncDegraded: no ingress for host console-openshift-console.apps.sno.openstack.lab in route console in namespace openshift-console",Upgradeable changed from Unknown to False ("ConsoleDefaultRouteSyncUpgradeable: no ingress for host console-openshift-console.apps.sno.openstack.lab in route console in namespace openshift-console") |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded message changed from "SyncLoopRefreshDegraded: route.route.openshift.io \"console\" not found\nOAuthClientsControllerDegraded: no ingress for host console-openshift-console.apps.sno.openstack.lab in route console in namespace openshift-console\nConsoleDefaultRouteSyncDegraded: no ingress for host console-openshift-console.apps.sno.openstack.lab in route console in namespace openshift-console" to "SyncLoopRefreshDegraded: route.route.openshift.io \"console\" not found\nOAuthClientsControllerDegraded: no ingress for host console-openshift-console.apps.sno.openstack.lab in route console in namespace openshift-console\nConsoleDefaultRouteSyncDegraded: no ingress for host console-openshift-console.apps.sno.openstack.lab in route console in namespace openshift-console\nDownloadsDefaultRouteSyncDegraded: no ingress for host downloads-openshift-console.apps.sno.openstack.lab in route downloads in namespace openshift-console",Upgradeable message changed from "ConsoleDefaultRouteSyncUpgradeable: no ingress for host console-openshift-console.apps.sno.openstack.lab in route console in namespace openshift-console" to "ConsoleDefaultRouteSyncUpgradeable: no ingress for host console-openshift-console.apps.sno.openstack.lab in route console in namespace openshift-console\nDownloadsDefaultRouteSyncUpgradeable: no ingress for host downloads-openshift-console.apps.sno.openstack.lab in route downloads in namespace openshift-console" |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded message changed from "SyncLoopRefreshDegraded: route.route.openshift.io \"console\" not found\nOAuthClientsControllerDegraded: no ingress for host console-openshift-console.apps.sno.openstack.lab in route console in namespace openshift-console\nConsoleDefaultRouteSyncDegraded: no ingress for host console-openshift-console.apps.sno.openstack.lab in route console in namespace openshift-console\nDownloadsDefaultRouteSyncDegraded: no ingress for host downloads-openshift-console.apps.sno.openstack.lab in route downloads in namespace openshift-console" to "SyncLoopRefreshDegraded: route.route.openshift.io \"console\" not found\nOAuthClientsControllerDegraded: no ingress for host console-openshift-console.apps.sno.openstack.lab in route console in namespace openshift-console\nConsoleDefaultRouteSyncDegraded: no ingress for host console-openshift-console.apps.sno.openstack.lab in route console in namespace openshift-console\nDownloadsDefaultRouteSyncDegraded: no ingress for host downloads-openshift-console.apps.sno.openstack.lab in route downloads in namespace openshift-console\nRouteHealthDegraded: console route is not admitted",Available changed from Unknown to False ("RouteHealthAvailable: console route is not admitted") |
| (x3) | openshift-console-operator | console-operator-console-downloads-deployment-controller-consoledownloadsdeploymentsynccontroller | console-operator | DeploymentUpdated | Updated Deployment.apps/downloads -n openshift-console because it changed |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded message changed from "SyncLoopRefreshDegraded: route.route.openshift.io \"console\" not found\nOAuthClientsControllerDegraded: no ingress for host console-openshift-console.apps.sno.openstack.lab in route console in namespace openshift-console\nConsoleDefaultRouteSyncDegraded: no ingress for host console-openshift-console.apps.sno.openstack.lab in route console in namespace openshift-console\nDownloadsDefaultRouteSyncDegraded: no ingress for host downloads-openshift-console.apps.sno.openstack.lab in route downloads in namespace openshift-console\nRouteHealthDegraded: console route is not admitted" to "SyncLoopRefreshDegraded: no ingress for host console-openshift-console.apps.sno.openstack.lab in route console in namespace openshift-console\nOAuthClientsControllerDegraded: no ingress for host console-openshift-console.apps.sno.openstack.lab in route console in namespace openshift-console\nConsoleDefaultRouteSyncDegraded: no ingress for host console-openshift-console.apps.sno.openstack.lab in route console in namespace openshift-console\nDownloadsDefaultRouteSyncDegraded: no ingress for host downloads-openshift-console.apps.sno.openstack.lab in route downloads in namespace openshift-console\nRouteHealthDegraded: console route is not admitted" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | openshift-kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/config-5 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | openshift-kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-pod-5 -n openshift-kube-apiserver because it was missing |
| | openshift-console | kubelet | downloads-69cd4c69bf-wlssv | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:50e368e01772dd0dc9c4f9a6cdd5a9693a224968f75dc19eafe2a416f583bdab" in 30.296s (30.296s including waiting). Image size: 2890347099 bytes. |
| | openshift-console | kubelet | downloads-69cd4c69bf-wlssv | Created | Created container: download-server |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | openshift-kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-5 -n openshift-kube-apiserver because it was missing |
| | openshift-console | kubelet | downloads-69cd4c69bf-wlssv | Started | Started container download-server |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | openshift-kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/etcd-serving-ca-5 -n openshift-kube-apiserver because it was missing |
| (x2) | openshift-console | kubelet | downloads-69cd4c69bf-wlssv | Unhealthy | Readiness probe failed: Get "http://10.128.0.86:8080/": dial tcp 10.128.0.86:8080: connect: connection refused |
| (x2) | openshift-console | kubelet | downloads-69cd4c69bf-wlssv | ProbeError | Readiness probe error: Get "http://10.128.0.86:8080/": dial tcp 10.128.0.86:8080: connect: connection refused body: |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | openshift-kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/bound-sa-token-signing-certs-5 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | openshift-kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-server-ca-5 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | openshift-kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kubelet-serving-ca-5 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | openshift-kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/sa-token-signing-certs-5 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | openshift-kube-apiserver-operator | SecretCreated | Created Secret/etcd-client-5 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | openshift-kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-audit-policies-5 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | openshift-kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-serving-certkey-5 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | openshift-kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-client-token-5 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | openshift-kube-apiserver-operator | SecretCreated | Created Secret/webhook-authenticator-5 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | openshift-kube-apiserver-operator | RevisionTriggered | new revision 5 triggered by "required secret/localhost-recovery-client-token has changed" |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Progressing changed from True to False ("All is well"),Available changed from False to True ("All is well") |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/kube-controller-manager-pod-4 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/cluster-policy-controller-config-4 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/config-4 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/controller-manager-kubeconfig-4 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/kube-controller-cert-syncer-kubeconfig-4 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | NodeCurrentRevisionChanged | Updated node "master-0" from revision 0 to 3 because static pod is ready |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/serviceaccount-ca-4 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/service-ca-4 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/recycler-config-4 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded changed from False to True ("NodeInstallerDegraded: 1 nodes are failing on revision 3:\nNodeInstallerDegraded: installer: icy-controller-config\",\nNodeInstallerDegraded: (string) (len=29) \"controller-manager-kubeconfig\",\nNodeInstallerDegraded: (string) (len=38) \"kube-controller-cert-syncer-kubeconfig\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=10) \"service-ca\",\nNodeInstallerDegraded: (string) (len=15) \"recycler-config\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"cloud-config\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=39) \"kube-controller-manager-client-cert-key\",\nNodeInstallerDegraded: (string) (len=10) \"csr-signer\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=66) \"/etc/kubernetes/static-pod-resources/kube-controller-manager-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I1204 11:50:26.401782 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I1204 11:50:26.499253 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I1204 11:50:26.499327 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I1204 11:50:26.499338 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I1204 11:50:26.501970 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I1204 11:50:56.502405 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F1204 11:51:10.506236 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: ") |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/service-account-private-key-4 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/serving-cert-4 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/localhost-recovery-client-token-4 -n openshift-kube-controller-manager because it was missing |
| (x2) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | StartingNewRevision | new revision 4 triggered by "required secret/localhost-recovery-client-token has changed" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | RevisionTriggered | new revision 4 triggered by "required secret/localhost-recovery-client-token has changed" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | openshift-kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"revision-status-5\" not found" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"revision-status-5\" not found\nInstallerControllerDegraded: missing required resources: configmaps: bound-sa-token-signing-certs-5,config-5,etcd-serving-ca-5,kube-apiserver-audit-policies-5,kube-apiserver-cert-syncer-kubeconfig-5,kube-apiserver-pod-5,kubelet-serving-ca-5,sa-token-signing-certs-5",Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 3; 0 nodes have achieved new revision 4" to "NodeInstallerProgressing: 1 node is at revision 4; 0 nodes have achieved new revision 5",Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3; 0 nodes have achieved new revision 4" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 4; 0 nodes have achieved new revision 5" |
| (x13) | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | openshift-kube-apiserver-operator | StartingNewRevision | new revision 5 triggered by "required secret/localhost-recovery-client-token has changed" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | openshift-kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"revision-status-5\" not found\nInstallerControllerDegraded: missing required resources: configmaps: bound-sa-token-signing-certs-5,config-5,etcd-serving-ca-5,kube-apiserver-audit-policies-5,kube-apiserver-cert-syncer-kubeconfig-5,kube-apiserver-pod-5,kubelet-serving-ca-5,sa-token-signing-certs-5" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: unable to ApplyStatus for operator using fieldManager \"kube-apiserver-RevisionController\": KubeAPIServer.operator.openshift.io \"cluster\" is invalid: status.latestAvailableRevision: Invalid value: \"integer\": must only increase\nInstallerControllerDegraded: missing required resources: configmaps: bound-sa-token-signing-certs-5,config-5,etcd-serving-ca-5,kube-apiserver-audit-policies-5,kube-apiserver-cert-syncer-kubeconfig-5,kube-apiserver-pod-5,kubelet-serving-ca-5,sa-token-signing-certs-5" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | openshift-kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"revision-status-5\" not found" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: no ingress for host oauth-openshift.apps.sno.openstack.lab in route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: route \"openshift-authentication/oauth-openshift\": status does not have a valid host address\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.202.245:443/healthz\": dial tcp 172.30.202.245:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: no ingress for host oauth-openshift.apps.sno.openstack.lab in route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: route \"openshift-authentication/oauth-openshift\": status does not have a valid host address\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.202.245:443/healthz\": dial tcp 172.30.202.245:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"99996238-c997-4cd9-aef9-50e5ee960960\", ResourceVersion:\"14621\", Generation:0, CreationTimestamp:time.Date(2025, time.December, 4, 11, 30, 1, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.December, 4, 11, 59, 22, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc001e912f0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" |
| (x12) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | RequiredInstallerResourcesMissing | configmaps: cluster-policy-controller-config-4,config-4,controller-manager-kubeconfig-4,kube-controller-cert-syncer-kubeconfig-4,kube-controller-manager-pod-4,recycler-config-4,service-ca-4,serviceaccount-ca-4 |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | StartingNewRevision | new revision 5 triggered by "required secret/localhost-recovery-client-token has changed" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/kube-scheduler-pod-5 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/config-5 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/serviceaccount-ca-5 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/scheduler-kubeconfig-5 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f042fa25014f3d37f3ea967d21f361d2a11833ae18f2c750318101b25d2497ce" already present on machine |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Created | Created container: kube-scheduler-cert-syncer |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Started | Started container kube-scheduler-cert-syncer |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-5 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | SecretCreated | Created Secret/serving-cert-5 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | NodeTargetRevisionChanged | Updating node "master-0" from revision 3 to 4 because node master-0 with revision 3 is the oldest |
| (x9) | openshift-operator-lifecycle-manager | operator-lifecycle-manager | packageserver | AllRequirementsMet | all requirements found, attempting install |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | SecretCreated | Created Secret/localhost-recovery-client-token-5 -n openshift-kube-scheduler because it was missing |
| (x6) | openshift-operator-lifecycle-manager | operator-lifecycle-manager | packageserver | InstallCheckFailed | install timeout |
| | openshift-operator-lifecycle-manager | operator-lifecycle-manager | packageserver | NeedsReinstall | installing: waiting for deployment packageserver to become ready: waiting for spec update of deployment "packageserver" to be observed... |
| (x8) | openshift-operator-lifecycle-manager | operator-lifecycle-manager | packageserver | InstallSucceeded | waiting for install components to report healthy |
| (x3) | openshift-operator-lifecycle-manager | operator-lifecycle-manager | packageserver | InstallWaiting | installing: waiting for deployment packageserver to become ready: waiting for spec update of deployment "packageserver" to be observed... |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | RevisionTriggered | new revision 5 triggered by "required secret/localhost-recovery-client-token has changed" |
| | openshift-kube-controller-manager | multus | installer-4-master-0 | AddedInterface | Add eth0 [10.128.0.88/23] from ovn-kubernetes |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | PodCreated | Created Pod/installer-4-master-0 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager | kubelet | installer-4-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a824e468cf8dd61d347e35b2ee5bc2f815666957647098e21a1bb56ff613e5b9" already present on machine |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded changed from True to False ("NodeControllerDegraded: All master nodes are ready"),Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 3" to "NodeInstallerProgressing: 1 node is at revision 3; 0 nodes have achieved new revision 4",Available changed from False to True ("StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3; 0 nodes have achieved new revision 4") |
| | openshift-kube-controller-manager | kubelet | installer-4-master-0 | Created | Created container: installer |
| | openshift-kube-controller-manager | kubelet | installer-4-master-0 | Started | Started container installer |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/machine-config-controller-events because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/mcc-prometheus-k8s -n openshift-machine-config-operator because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/machine-os-puller-binding -n openshift-machine-config-operator because it was missing |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: no ingress for host oauth-openshift.apps.sno.openstack.lab in route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: route \"openshift-authentication/oauth-openshift\": status does not have a valid host address\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.202.245:443/healthz\": dial tcp 172.30.202.245:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"99996238-c997-4cd9-aef9-50e5ee960960\", ResourceVersion:\"14621\", Generation:0, CreationTimestamp:time.Date(2025, time.December, 4, 11, 30, 1, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.December, 4, 11, 59, 22, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc001e912f0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: no ingress for host oauth-openshift.apps.sno.openstack.lab in route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: route \"openshift-authentication/oauth-openshift\": status does not have a valid host address\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.202.245:443/healthz\": dial tcp 172.30.202.245:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "OAuthServerRouteEndpointAccessibleControllerAvailable: failed to retrieve route from cache: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.202.245:443/healthz\": dial tcp 172.30.202.245:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods).\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"99996238-c997-4cd9-aef9-50e5ee960960\", ResourceVersion:\"14621\", Generation:0, CreationTimestamp:time.Date(2025, time.December, 4, 11, 30, 1, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.December, 4, 11, 59, 22, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc001e912f0), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" to "OAuthServerRouteEndpointAccessibleControllerAvailable: failed to retrieve route from cache: route.route.openshift.io \"oauth-openshift\" not
found\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.202.245:443/healthz\": dial tcp 172.30.202.245:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods).\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/machine-config-controller because it was missing | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/mcc-prometheus-k8s -n openshift-machine-config-operator because it was missing | |
| (x17) | openshift-kube-apiserver-operator |
kube-apiserver-operator-installer-controller |
openshift-kube-apiserver-operator |
RequiredInstallerResourcesMissing |
configmaps: bound-sa-token-signing-certs-5,config-5,etcd-serving-ca-5,kube-apiserver-audit-policies-5,kube-apiserver-cert-syncer-kubeconfig-5,kube-apiserver-pod-5,kubelet-serving-ca-5,sa-token-signing-certs-5 |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/machine-config-controller-events -n openshift-machine-config-operator because it was missing | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/machine-config-controller-events -n default because it was missing | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ConfigMapUpdated |
Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: caused by changes in data.config-file.yaml | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ValidatingAdmissionPolicyCreated |
Created ValidatingAdmissionPolicy.admissionregistration.k8s.io/custom-machine-config-pool-selector because it was missing | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ValidatingAdmissionPolicyCreated |
Created ValidatingAdmissionPolicy.admissionregistration.k8s.io/machine-configuration-guards because it was missing | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ServiceAccountCreated |
Created ServiceAccount/machine-os-puller -n openshift-machine-config-operator because it was missing | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/machine-config-controller because it was missing | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ServiceAccountCreated |
Created ServiceAccount/machine-config-controller -n openshift-machine-config-operator because it was missing | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ValidatingAdmissionPolicyCreated |
Created ValidatingAdmissionPolicy.admissionregistration.k8s.io/managed-bootimages-platform-check because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "StaticPodsDegraded: pod/openshift-kube-scheduler-master-0 container \"kube-scheduler-cert-syncer\" is terminated: Error: i/v1/namespaces/openshift-kube-scheduler/configmaps?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate signed by unknown authority\nStaticPodsDegraded: E1204 12:04:59.002104 1 reflector.go:158] \"Unhandled Error\" err=\"k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \\\"https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/configmaps?limit=500&resourceVersion=0\\\": tls: failed to verify certificate: x509: certificate signed by unknown authority\"\nStaticPodsDegraded: W1204 12:05:04.994903 1 reflector.go:561] k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243: failed to list *v1.Secret: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/secrets?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate signed by unknown authority\nStaticPodsDegraded: E1204 12:05:04.995003 1 reflector.go:158] \"Unhandled Error\" err=\"k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243: Failed to watch *v1.Secret: failed to list *v1.Secret: Get \\\"https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/secrets?limit=500&resourceVersion=0\\\": tls: failed to verify certificate: x509: certificate signed by unknown authority\"\nStaticPodsDegraded: W1204 12:05:54.325129 1 reflector.go:561] k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243: failed to list *v1.ConfigMap: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/configmaps?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate signed by unknown authority\nStaticPodsDegraded: E1204 12:05:54.325198 1 reflector.go:158] \"Unhandled Error\" 
err=\"k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \\\"https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/configmaps?limit=500&resourceVersion=0\\\": tls: failed to verify certificate: x509: certificate signed by unknown authority\"\nStaticPodsDegraded: F1204 12:05:55.537438 1 base_controller.go:105] unable to sync caches for CertSyncController\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/leader-election-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-scheduler\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/scheduler-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler:public-2\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/policyconfigmap-role.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/roles/system:openshift:sa-listing-configmaps\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/policyconfigmap-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/rolebindings/system:openshift:sa-listing-configmaps\": dial tcp 172.30.0.1:443: connect: connection 
refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/svc.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/services/scheduler\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: " | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 4; 0 nodes have achieved new revision 5"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 4" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 4; 0 nodes have achieved new revision 5" | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ValidatingAdmissionPolicyBindingCreated |
Created ValidatingAdmissionPolicyBinding.admissionregistration.k8s.io/machine-configuration-guards-binding because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Degraded message changed from "StaticPodsDegraded: pod/openshift-kube-scheduler-master-0 container \"kube-scheduler-cert-syncer\" is terminated: Error: i/v1/namespaces/openshift-kube-scheduler/configmaps?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate signed by unknown authority\nStaticPodsDegraded: E1204 12:04:59.002104 1 reflector.go:158] \"Unhandled Error\" err=\"k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \\\"https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/configmaps?limit=500&resourceVersion=0\\\": tls: failed to verify certificate: x509: certificate signed by unknown authority\"\nStaticPodsDegraded: W1204 12:05:04.994903 1 reflector.go:561] k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243: failed to list *v1.Secret: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/secrets?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate signed by unknown authority\nStaticPodsDegraded: E1204 12:05:04.995003 1 reflector.go:158] \"Unhandled Error\" err=\"k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243: Failed to watch *v1.Secret: failed to list *v1.Secret: Get \\\"https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/secrets?limit=500&resourceVersion=0\\\": tls: failed to verify certificate: x509: certificate signed by unknown authority\"\nStaticPodsDegraded: W1204 12:05:54.325129 1 reflector.go:561] k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243: failed to list *v1.ConfigMap: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/configmaps?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate signed by unknown authority\nStaticPodsDegraded: E1204 12:05:54.325198 1 reflector.go:158] \"Unhandled Error\" err=\"k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243: 
Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \\\"https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/configmaps?limit=500&resourceVersion=0\\\": tls: failed to verify certificate: x509: certificate signed by unknown authority\"\nStaticPodsDegraded: F1204 12:05:55.537438 1 base_controller.go:105] unable to sync caches for CertSyncController\nStaticPodsDegraded: \nNodeControllerDegraded: All master nodes are ready\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/leader-election-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-scheduler\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/scheduler-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler:public-2\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/policyconfigmap-role.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/roles/system:openshift:sa-listing-configmaps\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/policyconfigmap-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/rolebindings/system:openshift:sa-listing-configmaps\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: 
\"assets/kube-scheduler/svc.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/services/scheduler\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: " to "NodeControllerDegraded: All master nodes are ready\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/leader-election-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-scheduler\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/scheduler-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler:public-2\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/policyconfigmap-role.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/roles/system:openshift:sa-listing-configmaps\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/policyconfigmap-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/rolebindings/system:openshift:sa-listing-configmaps\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/svc.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/services/scheduler\": dial tcp 172.30.0.1:443: connect: 
connection refused\nKubeControllerManagerStaticResourcesDegraded: " | |
openshift-machine-config-operator |
replicaset-controller |
machine-config-controller-7c6d64c4cd |
SuccessfulCreate |
Created pod: machine-config-controller-7c6d64c4cd-5wrwt | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ValidatingAdmissionPolicyBindingCreated |
Created ValidatingAdmissionPolicyBinding.admissionregistration.k8s.io/managed-bootimages-platform-check-binding because it was missing | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ValidatingAdmissionPolicyBindingCreated |
Created ValidatingAdmissionPolicyBinding.admissionregistration.k8s.io/custom-machine-config-pool-selector-binding because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-installer-controller |
openshift-kube-scheduler-operator |
NodeTargetRevisionChanged |
Updating node "master-0" from revision 4 to 5 because node master-0 with revision 4 is the oldest | |
openshift-machine-config-operator |
deployment-controller |
machine-config-controller |
ScalingReplicaSet |
Scaled up replica set machine-config-controller-7c6d64c4cd to 1 | |
openshift-machine-config-operator |
kubelet |
machine-config-controller-7c6d64c4cd-5wrwt |
Created |
Created container: kube-rbac-proxy | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
openshift-kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: unable to ApplyStatus for operator using fieldManager \"kube-apiserver-RevisionController\": KubeAPIServer.operator.openshift.io \"cluster\" is invalid: status.latestAvailableRevision: Invalid value: \"integer\": must only increase\nInstallerControllerDegraded: missing required resources: configmaps: bound-sa-token-signing-certs-5,config-5,etcd-serving-ca-5,kube-apiserver-audit-policies-5,kube-apiserver-cert-syncer-kubeconfig-5,kube-apiserver-pod-5,kubelet-serving-ca-5,sa-token-signing-certs-5" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: configmaps: bound-sa-token-signing-certs-5,config-5,etcd-serving-ca-5,kube-apiserver-audit-policies-5,kube-apiserver-cert-syncer-kubeconfig-5,kube-apiserver-pod-5,kubelet-serving-ca-5,sa-token-signing-certs-5" | |
openshift-machine-config-operator |
kubelet |
machine-config-controller-7c6d64c4cd-5wrwt |
Started |
Started container machine-config-controller | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-installer-controller |
openshift-kube-apiserver-operator |
NodeTargetRevisionChanged |
Updating node "master-0" from revision 4 to 5 because node master-0 with revision 4 is the oldest | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Available message changed from "OAuthServerRouteEndpointAccessibleControllerAvailable: failed to retrieve route from cache: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.202.245:443/healthz\": dial tcp 172.30.202.245:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods).\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "OAuthServerRouteEndpointAccessibleControllerAvailable: failed to retrieve route from cache: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.202.245:443/healthz\": dial tcp 172.30.202.245:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" | |
openshift-machine-config-operator |
machine-config-operator |
master-0 |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", 
"StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} | |
openshift-machine-config-operator |
kubelet |
machine-config-controller-7c6d64c4cd-5wrwt |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b472823604757237c2d16bd6f6221f4cf562aa3b05942c7f602e1e8b2e55a7c6" already present on machine | |
openshift-machine-config-operator |
multus |
machine-config-controller-7c6d64c4cd-5wrwt |
AddedInterface |
Add eth0 [10.128.0.89/23] from ovn-kubernetes | |
openshift-machine-config-operator |
kubelet |
machine-config-controller-7c6d64c4cd-5wrwt |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69ffd8f8dcceedc2d6eb306cea33f8beabc1be1308cd5f4ee8b9a8e3eab9843" already present on machine | |
openshift-machine-config-operator |
kubelet |
machine-config-controller-7c6d64c4cd-5wrwt |
Started |
Started container kube-rbac-proxy | |
openshift-ingress-canary |
daemonset-controller |
ingress-canary |
SuccessfulCreate |
Created pod: ingress-canary-pz4lp | |
openshift-machine-config-operator |
kubelet |
machine-config-controller-7c6d64c4cd-5wrwt |
Created |
Created container: machine-config-controller | |
openshift-network-diagnostics |
multus |
network-check-source-85d8db45d4-5bjlq |
AddedInterface |
Add eth0 [10.128.0.94/23] from ovn-kubernetes | |
openshift-ingress |
kubelet |
router-default-5465c8b4db-58d52 |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2b3d313c599852b3543ee5c3a62691bd2d1bbad12c2e1c610cd71a1dec6eea32" | |
openshift-operator-lifecycle-manager |
multus |
collect-profiles-29414160-dmjlv |
AddedInterface |
Add eth0 [10.128.0.91/23] from ovn-kubernetes | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/leader-election-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-scheduler\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/scheduler-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler:public-2\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/policyconfigmap-role.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/roles/system:openshift:sa-listing-configmaps\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/policyconfigmap-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/rolebindings/system:openshift:sa-listing-configmaps\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/svc.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/services/scheduler\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: " to "NodeControllerDegraded: All master nodes are ready" | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-installer-controller |
openshift-kube-scheduler-operator |
PodCreated |
Created Pod/installer-5-master-0 -n openshift-kube-scheduler because it was missing | |
openshift-network-diagnostics |
kubelet |
network-check-source-85d8db45d4-5bjlq |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9724d2036305cbd729e1f484c5bad89971de977fff8a6723fef1873858dd1123" already present on machine | |
openshift-ingress-canary |
multus |
ingress-canary-pz4lp |
AddedInterface |
Add eth0 [10.128.0.93/23] from ovn-kubernetes | |
openshift-network-console |
kubelet |
networking-console-plugin-7d45bf9455-w67z7 |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c2faf0b5a0c3da0538257e1bb8c87f26619b75fd3219fb673a9e5d1ef6ff2feb" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
openshift-kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: configmaps: bound-sa-token-signing-certs-5,config-5,etcd-serving-ca-5,kube-apiserver-audit-policies-5,kube-apiserver-cert-syncer-kubeconfig-5,kube-apiserver-pod-5,kubelet-serving-ca-5,sa-token-signing-certs-5" to "NodeControllerDegraded: All master nodes are ready" | |
openshift-operator-lifecycle-manager |
kubelet |
collect-profiles-29414160-dmjlv |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1ca78c423f43f89a0411e40393642f64e4f8df9e5f61c25e31047c4cce170f9" already present on machine | |
openshift-network-console |
multus |
networking-console-plugin-7d45bf9455-w67z7 |
AddedInterface |
Add eth0 [10.128.0.90/23] from ovn-kubernetes | |
openshift-operator-lifecycle-manager |
kubelet |
collect-profiles-29414160-dmjlv |
Started |
Started container collect-profiles | |
openshift-kube-scheduler |
kubelet |
installer-5-master-0 |
Started |
Started container installer | |
openshift-operator-lifecycle-manager |
kubelet |
collect-profiles-29414160-dmjlv |
Created |
Created container: collect-profiles | |
openshift-monitoring |
multus |
prometheus-operator-admission-webhook-7c85c4dffd-xv2wn |
AddedInterface |
Add eth0 [10.128.0.92/23] from ovn-kubernetes | |
openshift-monitoring |
kubelet |
prometheus-operator-admission-webhook-7c85c4dffd-xv2wn |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d2d169850894a59fb18012f5b1cde98a7e30fa5b86967c9d16e4cba5e88d9a8d" | |
openshift-ingress-canary |
kubelet |
ingress-canary-pz4lp |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:831f30660844091d6154e2674d3a9da6f34271bf8a2c40b56f7416066318742b" already present on machine | |
openshift-ingress-canary |
kubelet |
ingress-canary-pz4lp |
Created |
Created container: serve-healthcheck-canary | |
openshift-ingress-canary |
kubelet |
ingress-canary-pz4lp |
Started |
Started container serve-healthcheck-canary | |
openshift-kube-scheduler |
multus |
installer-5-master-0 |
AddedInterface |
Add eth0 [10.128.0.95/23] from ovn-kubernetes | |
openshift-kube-scheduler |
kubelet |
installer-5-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f042fa25014f3d37f3ea967d21f361d2a11833ae18f2c750318101b25d2497ce" already present on machine | |
openshift-kube-scheduler |
kubelet |
installer-5-master-0 |
Created |
Created container: installer | |
openshift-operator-lifecycle-manager |
operator-lifecycle-manager |
packageserver |
InstallSucceeded |
install strategy completed with no errors | |
openshift-network-diagnostics |
kubelet |
network-check-source-85d8db45d4-5bjlq |
Started |
Started container check-endpoints | |
openshift-network-diagnostics |
kubelet |
network-check-source-85d8db45d4-5bjlq |
Created |
Created container: check-endpoints | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ServiceAccountCreated |
Created ServiceAccount/node-bootstrapper -n openshift-machine-config-operator because it was missing | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ServiceAccountCreated |
Created ServiceAccount/machine-config-server -n openshift-machine-config-operator because it was missing | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system-bootstrap-node-renewal because it was missing | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/machine-config-server because it was missing | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/machine-config-server because it was missing | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
SecretCreated |
Created Secret/node-bootstrapper-token -n openshift-machine-config-operator because it was missing | |
openshift-ingress |
kubelet |
router-default-5465c8b4db-58d52 |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2b3d313c599852b3543ee5c3a62691bd2d1bbad12c2e1c610cd71a1dec6eea32" in 3.555s (3.555s including waiting). Image size: 481499222 bytes. | |
openshift-machine-config-operator |
kubelet |
machine-config-server-lh6sx |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b472823604757237c2d16bd6f6221f4cf562aa3b05942c7f602e1e8b2e55a7c6" already present on machine | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-installer-controller |
openshift-kube-apiserver-operator |
PodCreated |
Created Pod/installer-5-master-0 -n openshift-kube-apiserver because it was missing | |
openshift-monitoring |
kubelet |
prometheus-operator-admission-webhook-7c85c4dffd-xv2wn |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d2d169850894a59fb18012f5b1cde98a7e30fa5b86967c9d16e4cba5e88d9a8d" in 2.681s (2.681s including waiting). Image size: 439040552 bytes. | |
openshift-network-console |
kubelet |
networking-console-plugin-7d45bf9455-w67z7 |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c2faf0b5a0c3da0538257e1bb8c87f26619b75fd3219fb673a9e5d1ef6ff2feb" in 3.244s (3.244s including waiting). Image size: 440979905 bytes. | |
openshift-machine-config-operator |
daemonset-controller |
machine-config-server |
SuccessfulCreate |
Created pod: machine-config-server-lh6sx | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Available message changed from "OAuthServerRouteEndpointAccessibleControllerAvailable: failed to retrieve route from cache: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.202.245:443/healthz\": dial tcp 172.30.202.245:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.202.245:443/healthz\": dial tcp 172.30.202.245:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" | |
openshift-kube-apiserver |
kubelet |
installer-5-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f43c31aa3359159d4557dad3cfaf812d8ce44db9cb9ae970e06d3479070b660" already present on machine | |
openshift-authentication-operator (x2) |
cluster-authentication-operator-config-observer-configobserver |
authentication-operator |
ObserveConsoleURL |
assetPublicURL changed from to https://console-openshift-console.apps.sno.openstack.lab |
openshift-console-operator |
console-operator-console-operator-consoleoperator |
console-operator |
ConfigMapCreated |
Created ConfigMap/console-config -n openshift-console because it was missing | |
openshift-console-operator |
console-operator-status-controller-statussyncer_console |
console-operator |
OperatorStatusChanged |
Status for clusteroperator/console changed: Degraded message changed from "SyncLoopRefreshDegraded: no ingress for host console-openshift-console.apps.sno.openstack.lab in route console in namespace openshift-console\nOAuthClientsControllerDegraded: no ingress for host console-openshift-console.apps.sno.openstack.lab in route console in namespace openshift-console\nConsoleDefaultRouteSyncDegraded: no ingress for host console-openshift-console.apps.sno.openstack.lab in route console in namespace openshift-console\nDownloadsDefaultRouteSyncDegraded: no ingress for host downloads-openshift-console.apps.sno.openstack.lab in route downloads in namespace openshift-console\nRouteHealthDegraded: console route is not admitted" to "SyncLoopRefreshDegraded: no ingress for host console-openshift-console.apps.sno.openstack.lab in route console in namespace openshift-console\nOAuthClientsControllerDegraded: no ingress for host console-openshift-console.apps.sno.openstack.lab in route console in namespace openshift-console\nDownloadsDefaultRouteSyncDegraded: no ingress for host downloads-openshift-console.apps.sno.openstack.lab in route downloads in namespace openshift-console\nRouteHealthDegraded: console route is not admitted",Upgradeable message changed from "ConsoleDefaultRouteSyncUpgradeable: no ingress for host console-openshift-console.apps.sno.openstack.lab in route console in namespace openshift-console\nDownloadsDefaultRouteSyncUpgradeable: no ingress for host downloads-openshift-console.apps.sno.openstack.lab in route downloads in namespace openshift-console" to "DownloadsDefaultRouteSyncUpgradeable: no ingress for host downloads-openshift-console.apps.sno.openstack.lab in route downloads in namespace openshift-console" | |
openshift-console-operator |
console-operator-console-operator-consoleoperator |
console-operator |
DeploymentCreated |
Created Deployment.apps/console -n openshift-console because it was missing | |
openshift-console-operator |
console-operator-status-controller-statussyncer_console |
console-operator |
OperatorStatusChanged |
Status for clusteroperator/console changed: Degraded message changed from "SyncLoopRefreshDegraded: no ingress for host console-openshift-console.apps.sno.openstack.lab in route console in namespace openshift-console\nOAuthClientsControllerDegraded: no ingress for host console-openshift-console.apps.sno.openstack.lab in route console in namespace openshift-console\nDownloadsDefaultRouteSyncDegraded: no ingress for host downloads-openshift-console.apps.sno.openstack.lab in route downloads in namespace openshift-console\nRouteHealthDegraded: console route is not admitted" to "SyncLoopRefreshDegraded: no ingress for host console-openshift-console.apps.sno.openstack.lab in route console in namespace openshift-console\nOAuthClientsControllerDegraded: no ingress for host console-openshift-console.apps.sno.openstack.lab in route console in namespace openshift-console\nRouteHealthDegraded: console route is not admitted",Upgradeable changed from False to True ("All is well") | |
openshift-console-operator |
console-operator-status-controller-statussyncer_console |
console-operator |
OperatorStatusChanged |
Status for clusteroperator/console changed: Degraded message changed from "SyncLoopRefreshDegraded: no ingress for host console-openshift-console.apps.sno.openstack.lab in route console in namespace openshift-console\nOAuthClientsControllerDegraded: no ingress for host console-openshift-console.apps.sno.openstack.lab in route console in namespace openshift-console\nRouteHealthDegraded: console route is not admitted" to "SyncLoopRefreshDegraded: no ingress for host console-openshift-console.apps.sno.openstack.lab in route console in namespace openshift-console\nRouteHealthDegraded: console route is not admitted" | |
openshift-console |
deployment-controller |
console |
ScalingReplicaSet |
Scaled up replica set console-654c77b6c6 to 1 | |
openshift-monitoring |
kubelet |
prometheus-operator-admission-webhook-7c85c4dffd-xv2wn |
Created |
Created container: prometheus-operator-admission-webhook | |
openshift-monitoring |
kubelet |
prometheus-operator-admission-webhook-7c85c4dffd-xv2wn |
Started |
Started container prometheus-operator-admission-webhook | |
openshift-kube-apiserver |
multus |
installer-5-master-0 |
AddedInterface |
Add eth0 [10.128.0.96/23] from ovn-kubernetes | |
openshift-authentication-operator (x2) |
cluster-authentication-operator-config-observer-configobserver |
authentication-operator |
ObservedConfigChanged |
Writing updated section ("oauthServer") of observed config: " map[string]any{\n \t\"corsAllowedOrigins\": []any{string(`//127\\.0\\.0\\.1(:|$)`), string(\"//localhost(:|$)\")},\n \t\"oauthConfig\": map[string]any{\n- \t\t\"assetPublicURL\": string(\"\"),\n+ \t\t\"assetPublicURL\": string(\"https://console-openshift-console.apps.sno.openstack.lab\"),\n \t\t\"loginURL\": string(\"https://api.sno.openstack.lab:6443\"),\n \t\t\"templates\": map[string]any{\"error\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...), \"login\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...), \"providerSelection\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...)},\n \t\t\"tokenConfig\": map[string]any{\"accessTokenMaxAgeSeconds\": float64(86400), \"authorizeTokenMaxAgeSeconds\": float64(300)},\n \t},\n \t\"serverArguments\": map[string]any{\"audit-log-format\": []any{string(\"json\")}, \"audit-log-maxbackup\": []any{string(\"10\")}, \"audit-log-maxsize\": []any{string(\"100\")}, \"audit-log-path\": []any{string(\"/var/log/oauth-server/audit.log\")}, ...},\n \t\"servingInfo\": map[string]any{\"cipherSuites\": []any{string(\"TLS_AES_128_GCM_SHA256\"), string(\"TLS_AES_256_GCM_SHA384\"), string(\"TLS_CHACHA20_POLY1305_SHA256\"), string(\"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256\"), ...}, \"minTLSVersion\": string(\"VersionTLS12\"), \"namedCertificates\": []any{map[string]any{\"certFile\": string(\"/var/config/system/secrets/v4-0-config-system-router-certs/apps.\"...), \"keyFile\": string(\"/var/config/system/secrets/v4-0-config-system-router-certs/apps.\"...), \"names\": []any{string(\"*.apps.sno.openstack.lab\")}}}},\n \t\"volumesToMount\": map[string]any{\"identityProviders\": string(\"{}\")},\n }\n" |
openshift-ingress |
kubelet |
router-default-5465c8b4db-58d52 |
Created |
Created container: router | |
openshift-kube-apiserver |
kubelet |
installer-5-master-0 |
Created |
Created container: installer | |
openshift-kube-apiserver |
kubelet |
installer-5-master-0 |
Started |
Started container installer | |
openshift-machine-config-operator |
machineconfigcontroller-rendercontroller |
worker |
RenderedConfigGenerated |
rendered-worker-32c9e3fc8fdd87ebd1d25d4d4be3125c successfully generated (release version: 4.18.29, controller version: bb2aa85171d93b2df952ed802a8cb200164e666f) | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: no ingress for host oauth-openshift.apps.sno.openstack.lab in route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: route \"openshift-authentication/oauth-openshift\": status does not have a valid host address\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.202.245:443/healthz\": dial tcp 172.30.202.245:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: route \"openshift-authentication/oauth-openshift\": status does not have a valid host address\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.202.245:443/healthz\": dial tcp 172.30.202.245:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: route \"openshift-authentication/oauth-openshift\": status does not have a valid host address\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.202.245:443/healthz\": dial tcp 172.30.202.245:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.202.245:443/healthz\": dial tcp 172.30.202.245:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" | |
openshift-machine-config-operator |
machineconfigcontroller-rendercontroller |
master |
RenderedConfigGenerated |
rendered-master-af5fc53c03f26de2a50e8c2bd4ef207b successfully generated (release version: 4.18.29, controller version: bb2aa85171d93b2df952ed802a8cb200164e666f) | |
openshift-network-console |
kubelet |
networking-console-plugin-7d45bf9455-w67z7 |
Started |
Started container networking-console-plugin | |
openshift-machine-config-operator |
kubelet |
machine-config-server-lh6sx |
Created |
Created container: machine-config-server | |
openshift-machine-config-operator |
kubelet |
machine-config-server-lh6sx |
Started |
Started container machine-config-server | |
openshift-ingress |
kubelet |
router-default-5465c8b4db-58d52 |
Started |
Started container router | |
openshift-network-console |
kubelet |
networking-console-plugin-7d45bf9455-w67z7 |
Created |
Created container: networking-console-plugin | |
openshift-console |
replicaset-controller |
console-654c77b6c6 |
SuccessfulCreate |
Created pod: console-654c77b6c6-kh7ws | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/machine-os-builder-events because it was missing | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/machine-os-builder-events -n default because it was missing | |
openshift-console-operator |
console-operator-console-operator-consoleoperator |
console-operator |
ConfigMapCreated |
Created ConfigMap/console-public -n openshift-config-managed because it was missing | |
openshift-console |
kubelet |
console-654c77b6c6-kh7ws |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e27a636083db9043e3e4bbdc336b5e7fb5693422246e443fd1d913e157f01d46" | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/machine-os-builder because it was missing | |
openshift-console |
multus |
console-654c77b6c6-kh7ws |
AddedInterface |
Add eth0 [10.128.0.97/23] from ovn-kubernetes | |
openshift-console |
deployment-controller |
console |
ScalingReplicaSet |
Scaled up replica set console-6dc95c8d8 to 1 | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/machine-os-builder because it was missing | |
openshift-monitoring |
deployment-controller |
prometheus-operator |
ScalingReplicaSet |
Scaled up replica set prometheus-operator-6c74d9cb9f to 1 | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ServiceCreated |
Created Service/prometheus-operator -n openshift-monitoring because it was missing | |
openshift-monitoring |
replicaset-controller |
prometheus-operator-6c74d9cb9f |
SuccessfulCreate |
Created pod: prometheus-operator-6c74d9cb9f-pxd98 | |
openshift-operator-lifecycle-manager |
cronjob-controller |
collect-profiles |
SawCompletedJob |
Saw completed job: collect-profiles-29414160, condition: Complete | |
openshift-operator-lifecycle-manager |
job-controller |
collect-profiles-29414160 |
Completed |
Job completed | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/machine-os-builder-anyuid because it was missing | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/machine-os-builder-events -n openshift-machine-config-operator because it was missing | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ServiceAccountCreated |
Created ServiceAccount/machine-os-builder -n openshift-machine-config-operator because it was missing | |
openshift-machine-config-operator |
machineconfigoperator |
machine-config |
OperatorDegraded: RequiredPoolsFailed |
Unable to apply 4.18.29: error during syncRequiredMachineConfigPools: context deadline exceeded | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ValidatingWebhookConfigurationCreated |
Created ValidatingWebhookConfiguration.admissionregistration.k8s.io/prometheusrules.openshift.io because it was missing | |
openshift-console-operator |
console-operator-status-controller-statussyncer_console |
console-operator |
OperatorStatusChanged |
Status for clusteroperator/console changed: Degraded message changed from "SyncLoopRefreshDegraded: no ingress for host console-openshift-console.apps.sno.openstack.lab in route console in namespace openshift-console\nRouteHealthDegraded: console route is not admitted" to "RouteHealthDegraded: console route is not admitted",Progressing changed from False to True ("SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected"),Available message changed from "RouteHealthAvailable: console route is not admitted" to "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: console route is not admitted" | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ValidatingWebhookConfigurationCreated |
Created ValidatingWebhookConfiguration.admissionregistration.k8s.io/alertmanagerconfigs.openshift.io because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-config-observer-configobserver |
openshift-kube-apiserver-operator |
ObservedConfigChanged |
Writing updated observed config:   map[string]any{   "admission": map[string]any{"pluginConfig": map[string]any{"PodSecurity": map[string]any{"configuration": map[string]any{"defaults": map[string]any{"audit": string("restricted"), "audit-version": string("latest"), "enforce": string("privileged"), "enforce-version": string("latest"), ...}}}, "network.openshift.io/ExternalIPRanger": map[string]any{"configuration": map[string]any{"allowIngressIP": bool(false), "apiVersion": string("network.openshift.io/v1"), "kind": string("ExternalIPRangerAdmissionConfig")}}, "network.openshift.io/RestrictedEndpointsAdmission": map[string]any{"configuration": map[string]any{"apiVersion": string("network.openshift.io/v1"), "kind": string("RestrictedEndpointsAdmissionConfig"), "restrictedCIDRs": []any{string("10.128.0.0/16"), string("172.30.0.0/16")}}}}},   "apiServerArguments": map[string]any{"api-audiences": []any{string("https://kubernetes.default.svc")}, "authentication-token-webhook-config-file": []any{string("/etc/kubernetes/static-pod-resources/secrets/webhook-authenticat"...)}, "authentication-token-webhook-version": []any{string("v1")}, "etcd-servers": []any{string("https://192.168.32.10:2379"), string("https://localhost:2379")}, ...}, + "authConfig": map[string]any{ + "oauthMetadataFile": string("/etc/kubernetes/static-pod-resources/configmaps/oauth-metadata/oauthMetadata"), + },   "corsAllowedOrigins": []any{string(`//127\.0\.0\.1(:|$)`), string("//localhost(:|$)")},   "gracefulTerminationDuration": string("15"),   ... // 2 identical entries   } | |
openshift-console |
replicaset-controller |
console-6dc95c8d8 |
SuccessfulCreate |
Created pod: console-6dc95c8d8-klv7m | |
openshift-console |
multus |
console-6dc95c8d8-klv7m |
AddedInterface |
Add eth0 [10.128.0.99/23] from ovn-kubernetes | |
openshift-authentication-operator |
cluster-authentication-operator-metadata-controller-openshift-authentication-metadata |
authentication-operator |
ConfigMapCreated |
Created ConfigMap/v4-0-config-system-metadata -n openshift-authentication because it was missing | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ConfigMapUpdated |
Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml | |
openshift-authentication |
replicaset-controller |
oauth-openshift-d676f96d8 |
SuccessfulCreate |
Created pod: oauth-openshift-d676f96d8-88p47 | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.202.245:443/healthz\": dial tcp 172.30.202.245:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.202.245:443/healthz\": dial tcp 172.30.202.245:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)",Progressing message changed from "" to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 0, desired generation is 1.",Available message changed from "OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.202.245:443/healthz\": dial tcp 172.30.202.245:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.202.245:443/healthz\": dial tcp 172.30.202.245:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" | |
openshift-monitoring |
multus |
prometheus-operator-6c74d9cb9f-pxd98 |
AddedInterface |
Add eth0 [10.128.0.98/23] from ovn-kubernetes | |
openshift-authentication |
deployment-controller |
oauth-openshift |
ScalingReplicaSet |
Scaled up replica set oauth-openshift-d676f96d8 to 1 | |
openshift-authentication-operator |
cluster-authentication-operator-oauthserver-workloadworkloadcontroller |
authentication-operator |
DeploymentCreated |
Created Deployment.apps/oauth-openshift -n openshift-authentication because it was missing | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 0, desired generation is 1." to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 0, desired generation is 1.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)",Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.202.245:443/healthz\": dial tcp 172.30.202.245:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.202.245:443/healthz\": dial tcp 172.30.202.245:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" | |
| | openshift-authentication-operator | cluster-authentication-operator-resource-sync-controller-resourcesynccontroller | authentication-operator | ConfigMapCreated | Created ConfigMap/oauth-openshift -n openshift-config-managed because it was missing |
| | openshift-authentication | multus | oauth-openshift-d676f96d8-88p47 | AddedInterface | Add eth0 [10.128.0.100/23] from ovn-kubernetes |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.202.245:443/healthz\": dial tcp 172.30.202.245:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.202.245:443/healthz\": dial tcp 172.30.202.245:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" |
| | openshift-monitoring | kubelet | prometheus-operator-6c74d9cb9f-pxd98 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ca1daf0b5b8e7f3f14effdd82b3ff227ad2706feb90490aa43f37fbbaa5903a0" |
| | openshift-console | kubelet | console-6dc95c8d8-klv7m | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e27a636083db9043e3e4bbdc336b5e7fb5693422246e443fd1d913e157f01d46" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | openshift-kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/oauth-metadata -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | openshift-kube-apiserver-operator | StartingNewRevision | new revision 6 triggered by "optional configmap/oauth-metadata has been created" |
| | openshift-authentication | kubelet | oauth-openshift-d676f96d8-88p47 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8860e00f858d1bca98344f21b5a5c4acc43c9c6eca8216582514021f0ab3cf7b" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | openshift-kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-pod-6 -n openshift-kube-apiserver because it was missing |
| | openshift-console | kubelet | console-654c77b6c6-kh7ws | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e27a636083db9043e3e4bbdc336b5e7fb5693422246e443fd1d913e157f01d46" in 7.902s (7.902s including waiting). Image size: 628330376 bytes. |
| | openshift-console | kubelet | console-654c77b6c6-kh7ws | Created | Created container: console |
| | openshift-authentication-operator | cluster-authentication-operator-payload-config-controller-openshift-authentication-payloadconfig | authentication-operator | ConfigMapUpdated | Updated ConfigMap/v4-0-config-system-cliconfig -n openshift-authentication: cause by changes in data.v4-0-config-system-cliconfig |
| | openshift-console | kubelet | console-654c77b6c6-kh7ws | Started | Started container console |
| (x2) | openshift-kube-apiserver-operator | kube-apiserver-operator-target-config-controller-targetconfigcontroller | openshift-kube-apiserver-operator | ConfigMapUpdated | Updated ConfigMap/config -n openshift-kube-apiserver: cause by changes in data.config.yaml |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | openshift-kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/config-6 -n openshift-kube-apiserver because it was missing |
| | openshift-authentication | deployment-controller | oauth-openshift | ScalingReplicaSet | Scaled down replica set oauth-openshift-d676f96d8 to 0 from 1 |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | openshift-kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-6 -n openshift-kube-apiserver because it was missing |
| | openshift-console | kubelet | console-6dc95c8d8-klv7m | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e27a636083db9043e3e4bbdc336b5e7fb5693422246e443fd1d913e157f01d46" in 5.266s (5.266s including waiting). Image size: 628330376 bytes. |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 0, desired generation is 1.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 1, desired generation is 2.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" |
| | openshift-console | kubelet | console-6dc95c8d8-klv7m | Started | Started container console |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | openshift-kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/oauth-metadata-6 -n openshift-kube-apiserver because it was missing |
| | openshift-console | kubelet | console-6dc95c8d8-klv7m | Created | Created container: console |
| (x2) | openshift-controller-manager-operator | openshift-controller-manager-operator-config-observer-configobserver | openshift-controller-manager-operator | ObservedConfigChanged | Writing updated observed config:   map[string]any{   "build": map[string]any{"buildDefaults": map[string]any{"resources": map[string]any{}}, "imageTemplateFormat": map[string]any{"format": string("quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:31aa3c7464"...)}},   "controllers": []any{   ... // 8 identical elements   string("openshift.io/deploymentconfig"),   string("openshift.io/image-import"),   strings.Join({ + "-",   "openshift.io/image-puller-rolebindings",   }, ""),   string("openshift.io/image-signature-import"),   string("openshift.io/image-trigger"),   ... // 2 identical elements   string("openshift.io/origin-namespace"),   string("openshift.io/serviceaccount"),   strings.Join({ + "-",   "openshift.io/serviceaccount-pull-secrets",   }, ""),   string("openshift.io/templateinstance"),   string("openshift.io/templateinstancefinalizer"),   string("openshift.io/unidling"),   },   "deployer": map[string]any{"imageTemplateFormat": map[string]any{"format": string("quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:42c3f5030d"...)}},   "featureGates": []any{string("BuildCSIVolumes=true")},   "ingress": map[string]any{"ingressIPNetworkCIDR": string("")},   } |
| | openshift-authentication | deployment-controller | oauth-openshift | ScalingReplicaSet | Scaled up replica set oauth-openshift-54f5d5c856 to 1 from 0 |
| | openshift-authentication | replicaset-controller | oauth-openshift-d676f96d8 | SuccessfulDelete | Deleted pod: oauth-openshift-d676f96d8-88p47 |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | openshift-kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/bound-sa-token-signing-certs-6 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | openshift-kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/etcd-serving-ca-6 -n openshift-kube-apiserver because it was missing |
| | openshift-authentication | replicaset-controller | oauth-openshift-54f5d5c856 | SuccessfulCreate | Created pod: oauth-openshift-54f5d5c856-4hhmn |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | openshift-kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kubelet-serving-ca-6 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | openshift-kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/sa-token-signing-certs-6 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | openshift-kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-server-ca-6 -n openshift-kube-apiserver because it was missing |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded message changed from "RouteHealthDegraded: console route is not admitted" to "RouteHealthDegraded: route not yet available, https://console-openshift-console.apps.sno.openstack.lab returns '503 Service Unavailable'",Available message changed from "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: console route is not admitted" to "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: route not yet available, https://console-openshift-console.apps.sno.openstack.lab returns '503 Service Unavailable'" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | openshift-kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-audit-policies-6 -n openshift-kube-apiserver because it was missing |
| | openshift-authentication | deployment-controller | oauth-openshift | ScalingReplicaSet | Scaled up replica set oauth-openshift-5b466d87 to 1 from 0 |
| | openshift-authentication | deployment-controller | oauth-openshift | ScalingReplicaSet | Scaled down replica set oauth-openshift-54f5d5c856 to 0 from 1 |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 1, desired generation is 2.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 2, desired generation is 3.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" |
| | openshift-authentication | replicaset-controller | oauth-openshift-5b466d87 | SuccessfulCreate | Created pod: oauth-openshift-5b466d87-4hv4r |
| | openshift-authentication | replicaset-controller | oauth-openshift-54f5d5c856 | SuccessfulDelete | Deleted pod: oauth-openshift-54f5d5c856-4hhmn |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | openshift-kube-apiserver-operator | SecretCreated | Created Secret/etcd-client-6 -n openshift-kube-apiserver because it was missing |
| | openshift-monitoring | kubelet | prometheus-operator-6c74d9cb9f-pxd98 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ca1daf0b5b8e7f3f14effdd82b3ff227ad2706feb90490aa43f37fbbaa5903a0" in 13.519s (13.519s including waiting). Image size: 456037002 bytes. |
| | openshift-monitoring | kubelet | prometheus-operator-6c74d9cb9f-pxd98 | Started | Started container prometheus-operator |
| | openshift-monitoring | kubelet | prometheus-operator-6c74d9cb9f-pxd98 | Created | Created container: prometheus-operator |
| | openshift-authentication | kubelet | oauth-openshift-d676f96d8-88p47 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8860e00f858d1bca98344f21b5a5c4acc43c9c6eca8216582514021f0ab3cf7b" in 14.05s (14.05s including waiting). Image size: 475921340 bytes. |
| | openshift-authentication | kubelet | oauth-openshift-d676f96d8-88p47 | Started | Started container oauth-openshift |
| | openshift-authentication | kubelet | oauth-openshift-d676f96d8-88p47 | Created | Created container: oauth-openshift |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 2, desired generation is 3.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 2, desired generation is 3." |
| | openshift-marketplace | kubelet | certified-operators-w4cqh | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1ca78c423f43f89a0411e40393642f64e4f8df9e5f61c25e31047c4cce170f9" already present on machine |
| | openshift-monitoring | kubelet | prometheus-operator-6c74d9cb9f-pxd98 | Created | Created container: kube-rbac-proxy |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.202.245:443/healthz\": dial tcp 172.30.202.245:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.202.245:443/healthz\": dial tcp 172.30.202.245:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWellKnownReadyControllerDegraded: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" |
| | openshift-marketplace | multus | certified-operators-w4cqh | AddedInterface | Add eth0 [10.128.0.101/23] from ovn-kubernetes |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 2, desired generation is 3." to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 2, desired generation is 3.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" |
| | openshift-monitoring | kubelet | prometheus-operator-6c74d9cb9f-pxd98 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69ffd8f8dcceedc2d6eb306cea33f8beabc1be1308cd5f4ee8b9a8e3eab9843" already present on machine |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.202.245:443/healthz\": dial tcp 172.30.202.245:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWellKnownReadyControllerDegraded: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.202.245:443/healthz\": dial tcp 172.30.202.245:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" |
| | openshift-monitoring | kubelet | prometheus-operator-6c74d9cb9f-pxd98 | Started | Started container kube-rbac-proxy |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Killing | Stopping container kube-controller-manager |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | openshift-kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-serving-certkey-6 -n openshift-kube-apiserver because it was missing |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 2, desired generation is 3.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/1 pods have been updated to the latest generation and 0/1 pods are available\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" |
| (x3) | openshift-authentication-operator | cluster-authentication-operator-oauthserver-workloadworkloadcontroller | authentication-operator | DeploymentUpdated | Updated Deployment.apps/oauth-openshift -n openshift-authentication because it changed |
| | openshift-kube-controller-manager | static-pod-installer | installer-4-master-0 | StaticPodInstallerCompleted | Successfully installed revision 4 |
| | openshift-marketplace | kubelet | certified-operators-w4cqh | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | certified-operators-w4cqh | Created | Created container: extract-utilities |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Killing | Stopping container kube-controller-manager-cert-syncer |
| | openshift-authentication | kubelet | oauth-openshift-d676f96d8-88p47 | Killing | Stopping container oauth-openshift |
| | openshift-marketplace | kubelet | certified-operators-w4cqh | Pulling | Pulling image "registry.redhat.io/redhat/certified-operator-index:v4.18" |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Killing | Stopping container kube-controller-manager-recovery-controller |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Killing | Stopping container cluster-policy-controller |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | openshift-kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-client-token-6 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | openshift-kube-apiserver-operator | RevisionTriggered | new revision 6 triggered by "optional configmap/oauth-metadata has been created" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | openshift-kube-apiserver-operator | SecretCreated | Created Secret/webhook-authenticator-6 -n openshift-kube-apiserver because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/openshift-state-metrics because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/telemeter-client because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:aggregated-metrics-reader because it was missing |
| | openshift-marketplace | kubelet | certified-operators-w4cqh | Pulled | Successfully pulled image "registry.redhat.io/redhat/certified-operator-index:v4.18" in 4.86s (4.86s including waiting). Image size: 1205106509 bytes. |
| | openshift-marketplace | kubelet | certified-operators-w4cqh | Created | Created container: extract-content |
| | openshift-marketplace | kubelet | certified-operators-w4cqh | Started | Started container extract-content |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceAccountCreated | Created ServiceAccount/metrics-server -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/prometheus-k8s because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/kube-state-metrics because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/node-exporter because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/telemeter-client because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/openshift-state-metrics because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/cluster-monitoring-view because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/prometheus-k8s because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceAccountCreated | Created ServiceAccount/node-exporter -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ConfigMapCreated | Created ConfigMap/kubelet-serving-ca-bundle -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceAccountCreated | Created ServiceAccount/telemeter-client -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceCreated | Created Service/thanos-querier -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceAccountCreated | Created ServiceAccount/kube-state-metrics -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceAccountCreated | Created ServiceAccount/openshift-state-metrics -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceCreated | Created Service/telemeter-client -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/monitoring-alertmanager-api-reader -n openshift-user-workload-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | SecretCreated | Created Secret/telemeter-client -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/monitoring-rules-edit because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:metrics-server because it was missing |
| | openshift-marketplace | kubelet | certified-operators-w4cqh | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/kube-state-metrics because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/monitoring-alertmanager-api-writer -n openshift-user-workload-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/node-exporter because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/telemeter-client-view because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | SecretCreated | Created Secret/telemeter-client-kube-rbac-proxy-config -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/monitoring-rules-view because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceCreated | Created Service/openshift-state-metrics -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceCreated | Created Service/kube-state-metrics -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/monitoring-edit because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceCreated | Created Service/node-exporter -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/alert-routing-edit because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/metrics-server-auth-reader -n kube-system because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceCreated | Created Service/metrics-server -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/user-workload-monitoring-config-edit -n openshift-user-workload-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/pod-metrics-reader because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:metrics-server because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ConfigMapCreated | Created ConfigMap/kube-state-metrics-custom-resource-state-configmap -n openshift-monitoring because it was missing |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Killing | Stopping container kube-scheduler |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/monitoring-alertmanager-edit -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/alertmanager-main because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceAccountCreated | Created ServiceAccount/thanos-querier -n openshift-monitoring because it was missing |
| | openshift-marketplace | kubelet | certified-operators-w4cqh | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" in 418ms (418ms including waiting). Image size: 912722556 bytes. |
| | openshift-kube-apiserver | kubelet | installer-5-master-0 | Killing | Stopping container installer |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | SecretCreated | Created Secret/thanos-querier-kube-rbac-proxy-web -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/alertmanager-main because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/thanos-querier because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceAccountCreated | Created ServiceAccount/alertmanager-main -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/monitoring-alertmanager-view -n openshift-monitoring because it was missing |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Killing | Stopping container kube-scheduler-recovery-controller |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceCreated | Created Service/alertmanager-main -n openshift-monitoring because it was missing |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ConfigMapCreated |
Created ConfigMap/alertmanager-trusted-ca-bundle -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/thanos-querier because it was missing | |
openshift-marketplace |
kubelet |
certified-operators-w4cqh |
Started |
Started container registry-server | |
openshift-marketplace |
kubelet |
certified-operators-w4cqh |
Created |
Created container: registry-server | |
openshift-kube-scheduler |
static-pod-installer |
installer-5-master-0 |
StaticPodInstallerCompleted |
Successfully installed revision 5 | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master-0 |
Killing |
Stopping container kube-scheduler-cert-syncer | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/cluster-monitoring-metrics-api -n openshift-monitoring because it was missing | |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | openshift-kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 4; 0 nodes have achieved new revision 5" to "NodeInstallerProgressing: 1 node is at revision 4; 0 nodes have achieved new revision 6",Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 4; 0 nodes have achieved new revision 5" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 4; 0 nodes have achieved new revision 6" |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/prometheus-k8s -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | SecretCreated | Created Secret/thanos-querier-grpc-tls-65n1etrkcotip -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceAccountCreated | Created ServiceAccount/prometheus-k8s -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/alertmanager-prometheusk8s -n openshift-monitoring because it was missing |
| (x3) | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapUpdated | Updated ConfigMap/config -n openshift-route-controller-manager: cause by changes in data.config.yaml |
| (x3) | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | DeploymentUpdated | Updated Deployment.apps/controller-manager -n openshift-controller-manager because it changed |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/prometheus-k8s -n kube-system because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/prometheus-k8s -n default because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/prometheus-k8s -n openshift-user-workload-monitoring because it was missing |
| (x3) | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapUpdated | Updated ConfigMap/config -n openshift-controller-manager: cause by changes in data.config.yaml |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/prometheus-k8s-config -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | SecretCreated | Created Secret/prometheus-k8s-kube-rbac-proxy-web -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | SecretCreated | Created Secret/grpc-tls -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | SecretCreated | Created Secret/kube-rbac-proxy -n openshift-monitoring because it was missing |
| (x3) | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | DeploymentUpdated | Updated Deployment.apps/route-controller-manager -n openshift-route-controller-manager because it changed |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Progressing changed from False to True ("Progressing: deployment/controller-manager: observed generation is 7, desired generation is 8.\nProgressing: deployment/route-controller-manager: observed generation is 7, desired generation is 8.\nProgressing: openshiftcontrollermanagers.operator.openshift.io/cluster: observed generation is 4, desired generation is 5.") |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceCreated | Created Service/monitoring-plugin -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceAccountCreated | Created ServiceAccount/monitoring-plugin -n openshift-monitoring because it was missing |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: status.relatedObjects changed from [{"console.openshift.io" "consoleplugins" "" "networking-console-plugin"} {"operator.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "infrastructures" "" "cluster"} {"config.openshift.io" "proxies" "" "cluster"} {"config.openshift.io" "oauths" "" "cluster"} {"oauth.openshift.io" "oauthclients" "" "console"} {"" "namespaces" "" "openshift-console-operator"} {"" "namespaces" "" "openshift-console"} {"" "configmaps" "openshift-config-managed" "console-public"}] to [{"console.openshift.io" "consoleplugins" "" "monitoring-plugin"} {"console.openshift.io" "consoleplugins" "" "networking-console-plugin"} {"operator.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "infrastructures" "" "cluster"} {"config.openshift.io" "proxies" "" "cluster"} {"config.openshift.io" "oauths" "" "cluster"} {"oauth.openshift.io" "oauthclients" "" "console"} {"" "namespaces" "" "openshift-console-operator"} {"" "namespaces" "" "openshift-console"} {"" "configmaps" "openshift-config-managed" "console-public"}] |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | openshift-kube-apiserver-operator | PodCreated | Created Pod/installer-6-master-0 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver | kubelet | installer-6-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f43c31aa3359159d4557dad3cfaf812d8ce44db9cb9ae970e06d3479070b660" already present on machine |
| | openshift-kube-apiserver | multus | installer-6-master-0 | AddedInterface | Add eth0 [10.128.0.102/23] from ovn-kubernetes |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | SecretCreated | Created Secret/prometheus-k8s-grpc-tls-90vge34bcmpum -n openshift-monitoring because it was missing |
| | openshift-kube-apiserver | kubelet | installer-6-master-0 | Created | Created container: installer |
| | openshift-kube-apiserver | kubelet | installer-6-master-0 | Started | Started container installer |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Started | Started container kube-controller-manager-cert-syncer |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a824e468cf8dd61d347e35b2ee5bc2f815666957647098e21a1bb56ff613e5b9" already present on machine |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a824e468cf8dd61d347e35b2ee5bc2f815666957647098e21a1bb56ff613e5b9" already present on machine |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Started | Started container kube-controller-manager-recovery-controller |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d64c13fe7663a0b4ae61d103b1b7598adcf317a01826f296bcb66b1a2de83c96" already present on machine |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "StaticPodsDegraded: pod/kube-controller-manager-master-0 container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: \nNodeControllerDegraded: All master nodes are ready" |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Created | Created container: kube-controller-manager-cert-syncer |
| | openshift-kube-controller-manager | cluster-policy-controller | kube-controller-manager-master-0 | ControlPlaneTopology | unable to get control plane topology, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope |
| | openshift-kube-controller-manager | cluster-policy-controller | cluster-policy-controller-lock | LeaderElection | master-0_eb1a5d39-d8e1-4149-b502-5c6c2a1b20ff became leader |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Created | Created container: kube-controller-manager-recovery-controller |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Started | Started container cluster-policy-controller |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Created | Created container: cluster-policy-controller |
| | openshift-kube-controller-manager | cert-recovery-controller | cert-recovery-controller-lock | LeaderElection | master-0_fdb1974b-d463-4010-99d9-c4593ae666e1 became leader |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "StaticPodsDegraded: pod/kube-controller-manager-master-0 container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: \nNodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready" |
| | openshift-marketplace | kubelet | certified-operators-w4cqh | Killing | Stopping container registry-server |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6d5891cdd7dcf7c9081de8b364b4c96446b7f946f7880fbae291a4592a198264" already present on machine |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Started | Started container wait-for-host-port |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Created | Created container: wait-for-host-port |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6d5891cdd7dcf7c9081de8b364b4c96446b7f946f7880fbae291a4592a198264" already present on machine |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Started | Started container kube-scheduler-cert-syncer |
| | openshift-cloud-controller-manager-operator | master-0_0bd9f3cf-fd1d-42b8-bdc1-b5702534e785 | cluster-cloud-controller-manager-leader | LeaderElection | master-0_0bd9f3cf-fd1d-42b8-bdc1-b5702534e785 became leader |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Started | Started container kube-scheduler |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Created | Created container: kube-scheduler |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f042fa25014f3d37f3ea967d21f361d2a11833ae18f2c750318101b25d2497ce" already present on machine |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f042fa25014f3d37f3ea967d21f361d2a11833ae18f2c750318101b25d2497ce" already present on machine |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Created | Created container: kube-scheduler-cert-syncer |
| | openshift-kube-scheduler | cert-recovery-controller | cert-recovery-controller-lock | LeaderElection | master-0_8478f2ad-d709-44a0-824b-f94503a9e866 became leader |
| | openshift-kube-scheduler | default-scheduler | kube-scheduler | LeaderElection | master-0_382af1fb-a26a-42b9-ab11-3f0bd111c560 became leader |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Created | Created container: kube-scheduler-recovery-controller |
| | openshift-marketplace | multus | community-operators-tnlxw | AddedInterface | Add eth0 [10.128.0.103/23] from ovn-kubernetes |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Started | Started container kube-scheduler-recovery-controller |
| | openshift-marketplace | kubelet | community-operators-tnlxw | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1ca78c423f43f89a0411e40393642f64e4f8df9e5f61c25e31047c4cce170f9" already present on machine |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 4"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3; 0 nodes have achieved new revision 4" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 4" |
| | openshift-marketplace | kubelet | community-operators-tnlxw | Started | Started container extract-utilities |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | NodeCurrentRevisionChanged | Updated node "master-0" from revision 3 to 4 because static pod is ready |
| | openshift-marketplace | kubelet | community-operators-tnlxw | Created | Created container: extract-utilities |
| | openshift-marketplace | kubelet | community-operators-tnlxw | Pulled | Successfully pulled image "registry.redhat.io/redhat/community-operator-index:v4.18" in 613ms (613ms including waiting). Image size: 1201434959 bytes. |
| | openshift-marketplace | kubelet | community-operators-tnlxw | Pulling | Pulling image "registry.redhat.io/redhat/community-operator-index:v4.18" |
| | openshift-marketplace | kubelet | community-operators-tnlxw | Started | Started container extract-content |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded changed from False to True ("RouteHealthDegraded: route not yet available, https://console-openshift-console.apps.sno.openstack.lab returns '503 Service Unavailable'") |
| | openshift-marketplace | kubelet | community-operators-tnlxw | Created | Created container: extract-content |
| | openshift-authentication | kubelet | oauth-openshift-5b466d87-4hv4r | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8860e00f858d1bca98344f21b5a5c4acc43c9c6eca8216582514021f0ab3cf7b" already present on machine |
| | openshift-marketplace | kubelet | community-operators-tnlxw | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" |
| | openshift-marketplace | kubelet | community-operators-tnlxw | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" in 505ms (505ms including waiting). Image size: 912722556 bytes. |
| | openshift-authentication | kubelet | oauth-openshift-5b466d87-4hv4r | Started | Started container oauth-openshift |
| | openshift-authentication | multus | oauth-openshift-5b466d87-4hv4r | AddedInterface | Add eth0 [10.128.0.104/23] from ovn-kubernetes |
| | openshift-authentication | kubelet | oauth-openshift-5b466d87-4hv4r | Created | Created container: oauth-openshift |
| | openshift-marketplace | kubelet | community-operators-tnlxw | Started | Started container registry-server |
| | openshift-marketplace | kubelet | community-operators-tnlxw | Created | Created container: registry-server |
| | openshift-machine-config-operator | machineconfigcontroller-nodecontroller | master | AnnotationChange | Node master-0 now has machineconfiguration.openshift.io/desiredConfig=rendered-master-af5fc53c03f26de2a50e8c2bd4ef207b |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 7, desired generation is 8.\nProgressing: deployment/route-controller-manager: observed generation is 7, desired generation is 8.\nProgressing: openshiftcontrollermanagers.operator.openshift.io/cluster: observed generation is 4, desired generation is 5." to "Progressing: deployment/controller-manager: observed generation is 7, desired generation is 8.\nProgressing: deployment/route-controller-manager: observed generation is 7, desired generation is 8." |
| (x5) | openshift-console | kubelet | console-654c77b6c6-kh7ws | ProbeError | Startup probe error: Get "https://10.128.0.97:8443/health": dial tcp 10.128.0.97:8443: connect: connection refused body: |
| (x5) | openshift-console | kubelet | console-654c77b6c6-kh7ws | Unhealthy | Startup probe failed: Get "https://10.128.0.97:8443/health": dial tcp 10.128.0.97:8443: connect: connection refused |
| | openshift-machine-config-operator | machineconfigcontroller-nodecontroller | master | AnnotationChange | Node master-0 now has machineconfiguration.openshift.io/currentConfig=rendered-master-af5fc53c03f26de2a50e8c2bd4ef207b |
| | openshift-machine-config-operator | machineconfigcontroller-nodecontroller | master | AnnotationChange | Node master-0 now has machineconfiguration.openshift.io/state=Done |
| | kube-system | kube-controller-manager | kube-controller-manager | LeaderElection | master-0_2e66c737-0d70-4d8c-aa9b-093907e18330 became leader |
| | openshift-console | replicaset-controller | console-64cdd44ddd | SuccessfulCreate | Created pod: console-64cdd44ddd-2t62p |
| | openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled up replica set controller-manager-5dcd7f7489 to 1 from 0 |
| | openshift-controller-manager | kubelet | controller-manager-b644f86d6-mvlh8 | Killing | Stopping container controller-manager |
| | openshift-controller-manager | replicaset-controller | controller-manager-5dcd7f7489 | SuccessfulCreate | Created pod: controller-manager-5dcd7f7489-ts824 |
| | openshift-monitoring | deployment-controller | openshift-state-metrics | ScalingReplicaSet | Scaled up replica set openshift-state-metrics-5974b6b869 to 1 |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.202.245:443/healthz\": dial tcp 172.30.202.245:443: connect: connection refused" to "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.202.245:443/healthz\": dial tcp 172.30.202.245:443: connect: connection refused" |
| | openshift-monitoring | statefulset-controller | alertmanager-main | SuccessfulCreate | create Pod alertmanager-main-0 in StatefulSet alertmanager-main successful |
| | openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled up replica set console-64cdd44ddd to 1 from 0 |
| | openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled up replica set route-controller-manager-74fbf9d4cf to 1 from 0 |
| | openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled down replica set controller-manager-b644f86d6 to 0 from 1 |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.202.245:443/healthz\": dial tcp 172.30.202.245:443: connect: connection refused\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" | |
openshift-monitoring |
deployment-controller |
monitoring-plugin |
ScalingReplicaSet |
Scaled up replica set monitoring-plugin-58f547f9c9 to 1 | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.202.245:443/healthz\": dial tcp 172.30.202.245:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "OAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.202.245:443/healthz\": dial tcp 172.30.202.245:443: connect: connection refused" | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.202.245:443/healthz\": dial tcp 172.30.202.245:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "OAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.202.245:443/healthz\": dial tcp 172.30.202.245:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.202.245:443/healthz\": dial tcp 172.30.202.245:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.202.245:443/healthz\": dial tcp 172.30.202.245:443: connect: connection refused\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" | |
| | openshift-route-controller-manager | kubelet | route-controller-manager-557cff67c-7qs6t | Killing | Stopping container route-controller-manager |
| | openshift-route-controller-manager | kubelet | route-controller-manager-557cff67c-7qs6t | ProbeError | Readiness probe error: Get "https://10.128.0.85:8443/healthz": dial tcp 10.128.0.85:8443: connect: connection refused body: |
| | openshift-monitoring | replicaset-controller | kube-state-metrics-5857974f64 | SuccessfulCreate | Created pod: kube-state-metrics-5857974f64-4rstk |
| | openshift-monitoring | deployment-controller | kube-state-metrics | ScalingReplicaSet | Scaled up replica set kube-state-metrics-5857974f64 to 1 |
| | openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled down replica set console-654c77b6c6 to 0 from 1 |
| | openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled down replica set route-controller-manager-557cff67c to 0 from 1 |
| | openshift-route-controller-manager | kubelet | route-controller-manager-557cff67c-7qs6t | Unhealthy | Readiness probe failed: Get "https://10.128.0.85:8443/healthz": dial tcp 10.128.0.85:8443: connect: connection refused |
| | openshift-monitoring | statefulset-controller | prometheus-k8s | SuccessfulCreate | create Pod prometheus-k8s-0 in StatefulSet prometheus-k8s successful |
| | openshift-controller-manager | replicaset-controller | controller-manager-b644f86d6 | SuccessfulDelete | Deleted pod: controller-manager-b644f86d6-mvlh8 |
| | openshift-route-controller-manager | replicaset-controller | route-controller-manager-557cff67c | SuccessfulDelete | Deleted pod: route-controller-manager-557cff67c-7qs6t |
| | openshift-monitoring | deployment-controller | telemeter-client | ScalingReplicaSet | Scaled up replica set telemeter-client-7487d49bdb to 1 |
| | openshift-monitoring | replicaset-controller | telemeter-client-7487d49bdb | SuccessfulCreate | Created pod: telemeter-client-7487d49bdb-7f2xj |
| | openshift-monitoring | replicaset-controller | metrics-server-88f9c775c | SuccessfulCreate | Created pod: metrics-server-88f9c775c-fw4ls |
| | openshift-monitoring | deployment-controller | metrics-server | ScalingReplicaSet | Scaled up replica set metrics-server-88f9c775c to 1 |
| | openshift-route-controller-manager | replicaset-controller | route-controller-manager-74fbf9d4cf | SuccessfulCreate | Created pod: route-controller-manager-74fbf9d4cf-77szk |
| | openshift-monitoring | replicaset-controller | monitoring-plugin-58f547f9c9 | SuccessfulCreate | Created pod: monitoring-plugin-58f547f9c9-wnpsq |
| | openshift-monitoring | replicaset-controller | openshift-state-metrics-5974b6b869 | SuccessfulCreate | Created pod: openshift-state-metrics-5974b6b869-5fzg8 |
| | openshift-monitoring | deployment-controller | thanos-querier | ScalingReplicaSet | Scaled up replica set thanos-querier-6c5fbf6b84 to 1 |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" |
| | openshift-monitoring | replicaset-controller | thanos-querier-6c5fbf6b84 | SuccessfulCreate | Created pod: thanos-querier-6c5fbf6b84-vvhts |
| | openshift-console | replicaset-controller | console-654c77b6c6 | SuccessfulDelete | Deleted pod: console-654c77b6c6-kh7ws |
| | default | node-controller | master-0 | RegisteredNode | Node master-0 event: Registered Node master-0 in Controller |
| | openshift-monitoring | daemonset-controller | node-exporter | SuccessfulCreate | Created pod: node-exporter-z89ck |
| | openshift-console | kubelet | console-654c77b6c6-kh7ws | Killing | Stopping container console |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-machine-config-operator | machineconfigoperator | machine-config | OperatorVersionChanged | clusteroperator/machine-config version changed from [] to [{operator 4.18.29} {operator-image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b472823604757237c2d16bd6f6221f4cf562aa3b05942c7f602e1e8b2e55a7c6}] |
| | openshift-monitoring | kubelet | monitoring-plugin-58f547f9c9-wnpsq | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f228d55f3812fdc1e6b37262baea72b19443d64142aaf5ac748ff875b15a1c9a" |
| | openshift-monitoring | multus | monitoring-plugin-58f547f9c9-wnpsq | AddedInterface | Add eth0 [10.128.0.105/23] from ovn-kubernetes |
| | openshift-monitoring | multus | kube-state-metrics-5857974f64-4rstk | AddedInterface | Add eth0 [10.128.0.107/23] from ovn-kubernetes |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded changed from True to False ("All is well") |
| | openshift-monitoring | multus | prometheus-k8s-0 | AddedInterface | Add eth0 [10.128.0.108/23] from ovn-kubernetes |
| | openshift-monitoring | kubelet | node-exporter-z89ck | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:df4cf41b98aaa1978e682187fd6d8e934d70cea9b500033fec197ffcb5c75ab6" |
| | openshift-monitoring | multus | metrics-server-88f9c775c-fw4ls | AddedInterface | Add eth0 [10.128.0.106/23] from ovn-kubernetes |
| | openshift-console | kubelet | console-64cdd44ddd-2t62p | Created | Created container: console |
| | openshift-console | kubelet | console-64cdd44ddd-2t62p | Started | Started container console |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7d0b49cba33162ab0c486a96c5767cf5ed237a065cf6a4e2fc01d60a13f418bf" |
| | openshift-monitoring | multus | alertmanager-main-0 | AddedInterface | Add eth0 [10.128.0.109/23] from ovn-kubernetes |
| | openshift-console | multus | console-64cdd44ddd-2t62p | AddedInterface | Add eth0 [10.128.0.113/23] from ovn-kubernetes |
| | openshift-monitoring | kubelet | thanos-querier-6c5fbf6b84-vvhts | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a9a6271d3a19d3ceff897d9d414271723a984d7c45b94aa521b2c8aa20e95983" |
| | openshift-monitoring | multus | thanos-querier-6c5fbf6b84-vvhts | AddedInterface | Add eth0 [10.128.0.112/23] from ovn-kubernetes |
| | openshift-monitoring | multus | telemeter-client-7487d49bdb-7f2xj | AddedInterface | Add eth0 [10.128.0.111/23] from ovn-kubernetes |
| | openshift-monitoring | kubelet | telemeter-client-7487d49bdb-7f2xj | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:445efcbc0255b904e1584fe9be9a513c1a9784088e35dd0abbdff5cae0961861" |
| | openshift-monitoring | multus | openshift-state-metrics-5974b6b869-5fzg8 | AddedInterface | Add eth0 [10.128.0.110/23] from ovn-kubernetes |
| | openshift-monitoring | kubelet | kube-state-metrics-5857974f64-4rstk | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f41e33fa119d569ba903ae6b18ec7cf1626d8c24da6f8acf9bcbafef2f043ae" |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7d0b49cba33162ab0c486a96c5767cf5ed237a065cf6a4e2fc01d60a13f418bf" |
| | openshift-monitoring | kubelet | openshift-state-metrics-5974b6b869-5fzg8 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8240dce6c308012c91feac525db3c5df2d91c631d071881b61f0528929e904" |
| | openshift-monitoring | kubelet | metrics-server-88f9c775c-fw4ls | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0824d9b793abc22c69ad35697e1bd3e725f07be0485f504d710ea1e8632d06ad" |
| | openshift-monitoring | kubelet | openshift-state-metrics-5974b6b869-5fzg8 | Started | Started container kube-rbac-proxy-self |
| | openshift-monitoring | kubelet | openshift-state-metrics-5974b6b869-5fzg8 | Created | Created container: kube-rbac-proxy-self |
| | openshift-monitoring | kubelet | openshift-state-metrics-5974b6b869-5fzg8 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69ffd8f8dcceedc2d6eb306cea33f8beabc1be1308cd5f4ee8b9a8e3eab9843" already present on machine |
| | openshift-monitoring | kubelet | openshift-state-metrics-5974b6b869-5fzg8 | Started | Started container kube-rbac-proxy-main |
| | openshift-monitoring | kubelet | openshift-state-metrics-5974b6b869-5fzg8 | Created | Created container: kube-rbac-proxy-main |
| | openshift-console | kubelet | console-64cdd44ddd-2t62p | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e27a636083db9043e3e4bbdc336b5e7fb5693422246e443fd1d913e157f01d46" already present on machine |
| | openshift-monitoring | kubelet | openshift-state-metrics-5974b6b869-5fzg8 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69ffd8f8dcceedc2d6eb306cea33f8beabc1be1308cd5f4ee8b9a8e3eab9843" already present on machine |
| | openshift-marketplace | kubelet | community-operators-tnlxw | Killing | Stopping container registry-server |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/1 pods have been updated to the latest generation and 0/1 pods are available\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "WellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)",Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "WellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" |
| | openshift-route-controller-manager | multus | route-controller-manager-74fbf9d4cf-77szk | AddedInterface | Add eth0 [10.128.0.115/23] from ovn-kubernetes |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorVersionChanged | clusteroperator/authentication version "oauth-openshift" changed from "" to "4.18.29_openshift" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: status.versions changed from [{"operator" "4.18.29"} {"oauth-apiserver" "4.18.29"}] to [{"operator" "4.18.29"} {"oauth-apiserver" "4.18.29"} {"oauth-openshift" "4.18.29_openshift"}] |
| | openshift-monitoring | kubelet | node-exporter-z89ck | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:df4cf41b98aaa1978e682187fd6d8e934d70cea9b500033fec197ffcb5c75ab6" in 4.107s (4.107s including waiting). Image size: 412150422 bytes. |
| | openshift-monitoring | kubelet | telemeter-client-7487d49bdb-7f2xj | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:445efcbc0255b904e1584fe9be9a513c1a9784088e35dd0abbdff5cae0961861" in 4.329s (4.329s including waiting). Image size: 474996496 bytes. |
| | openshift-route-controller-manager | kubelet | route-controller-manager-74fbf9d4cf-77szk | Created | Created container: route-controller-manager |
| | openshift-monitoring | kubelet | kube-state-metrics-5857974f64-4rstk | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69ffd8f8dcceedc2d6eb306cea33f8beabc1be1308cd5f4ee8b9a8e3eab9843" already present on machine |
| | openshift-monitoring | kubelet | kube-state-metrics-5857974f64-4rstk | Started | Started container kube-state-metrics |
| | openshift-monitoring | kubelet | kube-state-metrics-5857974f64-4rstk | Created | Created container: kube-state-metrics |
| | openshift-monitoring | kubelet | openshift-state-metrics-5974b6b869-5fzg8 | Created | Created container: openshift-state-metrics |
| | openshift-monitoring | kubelet | kube-state-metrics-5857974f64-4rstk | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f41e33fa119d569ba903ae6b18ec7cf1626d8c24da6f8acf9bcbafef2f043ae" in 4.667s (4.667s including waiting). Image size: 435019272 bytes. |
| | openshift-monitoring | kubelet | monitoring-plugin-58f547f9c9-wnpsq | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f228d55f3812fdc1e6b37262baea72b19443d64142aaf5ac748ff875b15a1c9a" in 5.205s (5.205s including waiting). Image size: 442268087 bytes. |
| | openshift-monitoring | kubelet | metrics-server-88f9c775c-fw4ls | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0824d9b793abc22c69ad35697e1bd3e725f07be0485f504d710ea1e8632d06ad" in 4.57s (4.57s including waiting). Image size: 465894629 bytes. |
| | openshift-route-controller-manager | kubelet | route-controller-manager-74fbf9d4cf-77szk | Started | Started container route-controller-manager |
| | openshift-monitoring | kubelet | node-exporter-z89ck | Started | Started container init-textfile |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7d0b49cba33162ab0c486a96c5767cf5ed237a065cf6a4e2fc01d60a13f418bf" in 4.547s (4.547s including waiting). Image size: 432377377 bytes. |
| | openshift-route-controller-manager | kubelet | route-controller-manager-74fbf9d4cf-77szk | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c416b201d480bddb5a4960ec42f4740761a1335001cf84ba5ae19ad6857771b1" already present on machine |
| | openshift-monitoring | kubelet | node-exporter-z89ck | Created | Created container: init-textfile |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container init-config-reloader |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: init-config-reloader |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7d0b49cba33162ab0c486a96c5767cf5ed237a065cf6a4e2fc01d60a13f418bf" in 4.582s (4.582s including waiting). Image size: 432377377 bytes. |
| | openshift-monitoring | kubelet | monitoring-plugin-58f547f9c9-wnpsq | Created | Created container: monitoring-plugin |
| | openshift-monitoring | kubelet | thanos-querier-6c5fbf6b84-vvhts | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a9a6271d3a19d3ceff897d9d414271723a984d7c45b94aa521b2c8aa20e95983" in 4.346s (4.346s including waiting). Image size: 497172184 bytes. |
| | openshift-controller-manager | multus | controller-manager-5dcd7f7489-ts824 | AddedInterface | Add eth0 [10.128.0.114/23] from ovn-kubernetes |
| | openshift-controller-manager | kubelet | controller-manager-5dcd7f7489-ts824 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eddedae7578d79b5a3f748000ae5c00b9f14a04710f9f9ec7b52fc569be5dfb8" already present on machine |
| | openshift-monitoring | kubelet | monitoring-plugin-58f547f9c9-wnpsq | Started | Started container monitoring-plugin |
| | openshift-monitoring | kubelet | openshift-state-metrics-5974b6b869-5fzg8 | Started | Started container openshift-state-metrics |
| | openshift-monitoring | kubelet | openshift-state-metrics-5974b6b869-5fzg8 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8240dce6c308012c91feac525db3c5df2d91c631d071881b61f0528929e904" in 4.137s (4.137s including waiting). Image size: 426442164 bytes. |
| | openshift-monitoring | kubelet | thanos-querier-6c5fbf6b84-vvhts | Started | Started container kube-rbac-proxy |
| | openshift-monitoring | kubelet | node-exporter-z89ck | Created | Created container: kube-rbac-proxy |
| | openshift-route-controller-manager | route-controller-manager | openshift-route-controllers | LeaderElection | route-controller-manager-74fbf9d4cf-77szk_e31a650f-676f-4bd1-ab7f-a4de97cfcf63 became leader |
| | openshift-controller-manager | kubelet | controller-manager-5dcd7f7489-ts824 | Created | Created container: controller-manager |
| | openshift-controller-manager | kubelet | controller-manager-5dcd7f7489-ts824 | Started | Started container controller-manager |
| | openshift-controller-manager | kubelet | controller-manager-5dcd7f7489-ts824 | ProbeError | Readiness probe error: Get "https://10.128.0.114:8443/healthz": dial tcp 10.128.0.114:8443: connect: connection refused body: |
| | openshift-controller-manager | kubelet | controller-manager-5dcd7f7489-ts824 | Unhealthy | Readiness probe failed: Get "https://10.128.0.114:8443/healthz": dial tcp 10.128.0.114:8443: connect: connection refused |
| | openshift-monitoring | kubelet | thanos-querier-6c5fbf6b84-vvhts | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c84b5ebe858246af77fb40b85b6ea917fa2a4a651b740cd3320d461164d0ef8" |
| | openshift-monitoring | kubelet | thanos-querier-6c5fbf6b84-vvhts | Created | Created container: kube-rbac-proxy |
| | openshift-monitoring | kubelet | thanos-querier-6c5fbf6b84-vvhts | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69ffd8f8dcceedc2d6eb306cea33f8beabc1be1308cd5f4ee8b9a8e3eab9843" already present on machine |
| | openshift-monitoring | kubelet | thanos-querier-6c5fbf6b84-vvhts | Started | Started container kube-rbac-proxy-web |
| | openshift-monitoring | kubelet | thanos-querier-6c5fbf6b84-vvhts | Created | Created container: kube-rbac-proxy-web |
| | openshift-monitoring | kubelet | thanos-querier-6c5fbf6b84-vvhts | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69ffd8f8dcceedc2d6eb306cea33f8beabc1be1308cd5f4ee8b9a8e3eab9843" already present on machine |
| | openshift-monitoring | kubelet | thanos-querier-6c5fbf6b84-vvhts | Started | Started container thanos-query |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:03d91f263cf6eef98d53e83e218e32a55576ebdd31daa8f6abd33b8866c3d5c4" |
| | openshift-monitoring | kubelet | thanos-querier-6c5fbf6b84-vvhts | Created | Created container: thanos-query |
| | openshift-monitoring | kubelet | kube-state-metrics-5857974f64-4rstk | Created | Created container: kube-rbac-proxy-main |
| | openshift-monitoring | kubelet | kube-state-metrics-5857974f64-4rstk | Started | Started container kube-rbac-proxy-main |
| | openshift-monitoring | kubelet | kube-state-metrics-5857974f64-4rstk | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69ffd8f8dcceedc2d6eb306cea33f8beabc1be1308cd5f4ee8b9a8e3eab9843" already present on machine |
| | openshift-monitoring | kubelet | kube-state-metrics-5857974f64-4rstk | Created | Created container: kube-rbac-proxy-self |
| | openshift-monitoring | kubelet | kube-state-metrics-5857974f64-4rstk | Started | Started container kube-rbac-proxy-self |
| | openshift-monitoring | kubelet | metrics-server-88f9c775c-fw4ls | Created | Created container: metrics-server |
| | openshift-monitoring | kubelet | metrics-server-88f9c775c-fw4ls | Started | Started container metrics-server |
| | openshift-monitoring | kubelet | telemeter-client-7487d49bdb-7f2xj | Started | Started container kube-rbac-proxy |
| | openshift-monitoring | kubelet | telemeter-client-7487d49bdb-7f2xj | Created | Created container: kube-rbac-proxy |
| | openshift-monitoring | kubelet | telemeter-client-7487d49bdb-7f2xj | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69ffd8f8dcceedc2d6eb306cea33f8beabc1be1308cd5f4ee8b9a8e3eab9843" already present on machine |
| | openshift-monitoring | kubelet | telemeter-client-7487d49bdb-7f2xj | Started | Started container reload |
| | openshift-monitoring | kubelet | telemeter-client-7487d49bdb-7f2xj | Created | Created container: reload |
| | openshift-monitoring | kubelet | telemeter-client-7487d49bdb-7f2xj | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7d0b49cba33162ab0c486a96c5767cf5ed237a065cf6a4e2fc01d60a13f418bf" already present on machine |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container init-config-reloader |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: init-config-reloader |
| | openshift-monitoring | kubelet | telemeter-client-7487d49bdb-7f2xj | Started | Started container telemeter-client |
| | openshift-monitoring | kubelet | telemeter-client-7487d49bdb-7f2xj | Created | Created container: telemeter-client |
| | openshift-controller-manager | openshift-controller-manager | openshift-master-controllers | LeaderElection | controller-manager-5dcd7f7489-ts824 became leader |
| | openshift-monitoring | kubelet | node-exporter-z89ck | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:df4cf41b98aaa1978e682187fd6d8e934d70cea9b500033fec197ffcb5c75ab6" already present on machine |
| | openshift-monitoring | kubelet | node-exporter-z89ck | Created | Created container: node-exporter |
| | openshift-monitoring | kubelet | node-exporter-z89ck | Started | Started container node-exporter |
| | openshift-monitoring | kubelet | node-exporter-z89ck | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69ffd8f8dcceedc2d6eb306cea33f8beabc1be1308cd5f4ee8b9a8e3eab9843" already present on machine |
| | openshift-monitoring | kubelet | node-exporter-z89ck | Started | Started container kube-rbac-proxy |
| (x5) | openshift-console | kubelet | console-6dc95c8d8-klv7m | Unhealthy | Startup probe failed: Get "https://10.128.0.99:8443/health": dial tcp 10.128.0.99:8443: connect: connection refused |
| (x5) | openshift-console | kubelet | console-6dc95c8d8-klv7m | ProbeError | Startup probe error: Get "https://10.128.0.99:8443/health": dial tcp 10.128.0.99:8443: connect: connection refused body: |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91795c7ae050c24ea79ae91b18a4e39a1a527b046deecf7fc795c22caf0b3f59" |
| | openshift-monitoring | kubelet | thanos-querier-6c5fbf6b84-vvhts | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c84b5ebe858246af77fb40b85b6ea917fa2a4a651b740cd3320d461164d0ef8" in 1.291s (1.291s including waiting). Image size: 407565857 bytes. |
| | openshift-monitoring | kubelet | thanos-querier-6c5fbf6b84-vvhts | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69ffd8f8dcceedc2d6eb306cea33f8beabc1be1308cd5f4ee8b9a8e3eab9843" already present on machine |
| | openshift-monitoring | kubelet | thanos-querier-6c5fbf6b84-vvhts | Created | Created container: kube-rbac-proxy-metrics |
| | openshift-console | kubelet | console-6dc95c8d8-klv7m | Killing | Stopping container console |
| | openshift-console | multus | console-75b84c855f-2zcgd | AddedInterface | Add eth0 [10.128.0.116/23] from ovn-kubernetes |
| | openshift-console | replicaset-controller | console-6dc95c8d8 | SuccessfulDelete | Deleted pod: console-6dc95c8d8-klv7m |
| | openshift-monitoring | kubelet | thanos-querier-6c5fbf6b84-vvhts | Created | Created container: kube-rbac-proxy-rules |
| | openshift-monitoring | kubelet | thanos-querier-6c5fbf6b84-vvhts | Started | Started container kube-rbac-proxy-rules |
| | openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled down replica set console-6dc95c8d8 to 0 from 1 |
| | openshift-monitoring | kubelet | thanos-querier-6c5fbf6b84-vvhts | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69ffd8f8dcceedc2d6eb306cea33f8beabc1be1308cd5f4ee8b9a8e3eab9843" already present on machine |
| | openshift-monitoring | kubelet | thanos-querier-6c5fbf6b84-vvhts | Started | Started container prom-label-proxy |
| | openshift-monitoring | kubelet | thanos-querier-6c5fbf6b84-vvhts | Started | Started container kube-rbac-proxy-metrics |
| (x2) | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Progressing message changed from "SyncLoopRefreshProgressing: working toward version 4.18.29, 0 replicas available" to "SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected" |
| | openshift-monitoring | kubelet | thanos-querier-6c5fbf6b84-vvhts | Created | Created container: prom-label-proxy |
| | openshift-console | replicaset-controller | console-75b84c855f | SuccessfulCreate | Created pod: console-75b84c855f-2zcgd |
| | openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled up replica set console-75b84c855f to 1 from 0 |
| (x3) | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Progressing message changed from "SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected" to "SyncLoopRefreshProgressing: working toward version 4.18.29, 0 replicas available" |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91795c7ae050c24ea79ae91b18a4e39a1a527b046deecf7fc795c22caf0b3f59" in 3.371s (3.371s including waiting). Image size: 462002699 bytes. |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:03d91f263cf6eef98d53e83e218e32a55576ebdd31daa8f6abd33b8866c3d5c4" in 4.436s (4.436s including waiting). Image size: 600165109 bytes. |
| | openshift-console | kubelet | console-75b84c855f-2zcgd | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e27a636083db9043e3e4bbdc336b5e7fb5693422246e443fd1d913e157f01d46" already present on machine |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: kube-rbac-proxy-web |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: kube-rbac-proxy-web |
| | openshift-console | kubelet | console-75b84c855f-2zcgd | Started | Started container console |
| | openshift-console | kubelet | console-75b84c855f-2zcgd | Created | Created container: console |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: kube-rbac-proxy |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: prometheus |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container prometheus |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7d0b49cba33162ab0c486a96c5767cf5ed237a065cf6a4e2fc01d60a13f418bf" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69ffd8f8dcceedc2d6eb306cea33f8beabc1be1308cd5f4ee8b9a8e3eab9843" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: config-reloader |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container config-reloader |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container kube-rbac-proxy |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69ffd8f8dcceedc2d6eb306cea33f8beabc1be1308cd5f4ee8b9a8e3eab9843" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a9a6271d3a19d3ceff897d9d414271723a984d7c45b94aa521b2c8aa20e95983" already present on machine |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: alertmanager |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container alertmanager |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7d0b49cba33162ab0c486a96c5767cf5ed237a065cf6a4e2fc01d60a13f418bf" already present on machine |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: config-reloader |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container config-reloader |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69ffd8f8dcceedc2d6eb306cea33f8beabc1be1308cd5f4ee8b9a8e3eab9843" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: thanos-sidecar |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container kube-rbac-proxy-web |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69ffd8f8dcceedc2d6eb306cea33f8beabc1be1308cd5f4ee8b9a8e3eab9843" already present on machine |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: kube-rbac-proxy |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container kube-rbac-proxy |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69ffd8f8dcceedc2d6eb306cea33f8beabc1be1308cd5f4ee8b9a8e3eab9843" already present on machine |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: kube-rbac-proxy-metric |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container kube-rbac-proxy-metric |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c84b5ebe858246af77fb40b85b6ea917fa2a4a651b740cd3320d461164d0ef8" already present on machine |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Started |
Started container thanos-sidecar | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69ffd8f8dcceedc2d6eb306cea33f8beabc1be1308cd5f4ee8b9a8e3eab9843" already present on machine | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Started |
Started container kube-rbac-proxy-web | |
openshift-machine-config-operator |
machineconfigdaemon |
master-0 |
Uncordon |
Update completed for config rendered-master-af5fc53c03f26de2a50e8c2bd4ef207b and node has been uncordoned | |
openshift-machine-config-operator |
machineconfigcontroller-nodecontroller |
master |
AnnotationChange |
Node master-0 now has machineconfiguration.openshift.io/reason= | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Started |
Started container prom-label-proxy | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Created |
Created container: prom-label-proxy | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Started |
Started container kube-rbac-proxy-thanos | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Created |
Created container: kube-rbac-proxy-thanos | |
openshift-machine-config-operator |
machineconfigdaemon |
master-0 |
ConfigDriftMonitorStarted |
Config Drift Monitor started, watching against rendered-master-af5fc53c03f26de2a50e8c2bd4ef207b | |
openshift-machine-config-operator |
machineconfigdaemon |
master-0 |
NodeDone |
Setting node master-0, currentConfig rendered-master-af5fc53c03f26de2a50e8c2bd4ef207b to Done | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager |
openshift-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-controller-manager changed: Progressing changed from True to False ("All is well") | |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Killing | Stopping container kube-apiserver |
| | openshift-kube-apiserver | kubelet | kube-apiserver-startup-monitor-master-0 | Started | Started container startup-monitor |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Killing | Stopping container kube-apiserver-cert-syncer |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Killing | Stopping container kube-apiserver-check-endpoints |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Killing | Stopping container kube-apiserver-insecure-readyz |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller | etcd-operator | EtcdCertSignerControllerUpdatingStatus | Put "https://172.30.0.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 172.30.0.1:443: connect: connection refused |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | InFlightRequestsDrained | All non long-running request(s) in-flight have drained |
| | openshift-machine-config-operator | machineconfigoperator | machine-config | OperatorDegraded: MachineOSBuilderFailed | Failed to resync 4.18.29 because: failed to apply machine os builder manifests: Get "https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/machine-os-builder": dial tcp 172.30.0.1:443: connect: connection refused |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | HTTPServerStoppedListening | HTTP Server has stopped listening |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | ShutdownInitiated | Received signal to terminate, becoming unready, but keeping serving |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | AfterShutdownDelayDuration | The minimal shutdown duration of 0s finished |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | TerminationPreShutdownHooksFinished | All pre-shutdown hooks have been finished |
| | openshift-kube-apiserver | kubelet | kube-apiserver-startup-monitor-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f43c31aa3359159d4557dad3cfaf812d8ce44db9cb9ae970e06d3479070b660" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-startup-monitor-master-0 | Created | Created container: startup-monitor |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Killing | Stopping container kube-apiserver-cert-regeneration-controller |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "authorization.openshift.io.v1" failed with an attempt failed with statusCode = 0, err = Get "https://172.30.0.1:443/apis/authorization.openshift.io/v1": dial tcp 172.30.0.1:443: connect: connection refused |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | TerminationGracefulTerminationFinished | All pending requests processed |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "build.openshift.io.v1" failed with an attempt failed with statusCode = 0, err = Get "https://172.30.0.1:443/apis/build.openshift.io/v1": dial tcp 172.30.0.1:443: connect: connection refused |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "image.openshift.io.v1" failed with an attempt failed with statusCode = 0, err = Get "https://172.30.0.1:443/apis/image.openshift.io/v1": dial tcp 172.30.0.1:443: connect: connection refused |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "project.openshift.io.v1" failed with an attempt failed with statusCode = 0, err = Get "https://172.30.0.1:443/apis/project.openshift.io/v1": dial tcp 172.30.0.1:443: connect: connection refused |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "quota.openshift.io.v1" failed with an attempt failed with statusCode = 0, err = Get "https://172.30.0.1:443/apis/quota.openshift.io/v1": dial tcp 172.30.0.1:443: connect: connection refused |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "route.openshift.io.v1" failed with an attempt failed with statusCode = 0, err = Get "https://172.30.0.1:443/apis/route.openshift.io/v1": dial tcp 172.30.0.1:443: connect: connection refused |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "security.openshift.io.v1" failed with an attempt failed with statusCode = 0, err = Get "https://172.30.0.1:443/apis/security.openshift.io/v1": dial tcp 172.30.0.1:443: connect: connection refused |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "template.openshift.io.v1" failed with an attempt failed with statusCode = 0, err = Get "https://172.30.0.1:443/apis/template.openshift.io/v1": dial tcp 172.30.0.1:443: connect: connection refused |
| (x2) | openshift-machine-config-operator | kubelet | machine-config-daemon-8jwk5 | Created | Created container: machine-config-daemon |
| (x3) | openshift-machine-config-operator | kubelet | machine-config-daemon-8jwk5 | Unhealthy | Liveness probe failed: Get "http://127.0.0.1:8798/health": dial tcp 127.0.0.1:8798: connect: connection refused |
| (x2) | openshift-machine-config-operator | kubelet | machine-config-daemon-8jwk5 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b472823604757237c2d16bd6f6221f4cf562aa3b05942c7f602e1e8b2e55a7c6" already present on machine |
| (x2) | openshift-machine-config-operator | kubelet | machine-config-daemon-8jwk5 | Started | Started container machine-config-daemon |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-8jwk5 | Killing | Container machine-config-daemon failed liveness probe, will be restarted |
| (x3) | openshift-machine-config-operator | kubelet | machine-config-daemon-8jwk5 | ProbeError | Liveness probe error: Get "http://127.0.0.1:8798/health": dial tcp 127.0.0.1:8798: connect: connection refused body: |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6d5891cdd7dcf7c9081de8b364b4c96446b7f946f7880fbae291a4592a198264" already present on machine |
| (x2) | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6d5891cdd7dcf7c9081de8b364b4c96446b7f946f7880fbae291a4592a198264" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container setup |
| (x2) | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Started | Started container kube-controller-manager |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: setup |
| (x2) | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Created | Created container: kube-controller-manager |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6d5891cdd7dcf7c9081de8b364b4c96446b7f946f7880fbae291a4592a198264" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f43c31aa3359159d4557dad3cfaf812d8ce44db9cb9ae970e06d3479070b660" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver-cert-syncer |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f43c31aa3359159d4557dad3cfaf812d8ce44db9cb9ae970e06d3479070b660" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver-insecure-readyz |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver-cert-syncer |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f43c31aa3359159d4557dad3cfaf812d8ce44db9cb9ae970e06d3479070b660" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver-cert-regeneration-controller |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver-insecure-readyz |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f43c31aa3359159d4557dad3cfaf812d8ce44db9cb9ae970e06d3479070b660" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver-cert-regeneration-controller |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver-check-endpoints |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver-check-endpoints |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | KubeAPIReadyz | readyz=true |
| | openshift-machine-config-operator | machine-config-operator | openshift-machine-config-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-kube-apiserver | cert-regeneration-controller | cert-regeneration-controller-lock | LeaderElection | master-0_6aca35ff-e375-4d3b-a42c-8dbcb164cb6d became leader |
| | openshift-machine-config-operator | machineconfigdaemon | master-0 | NodeDone | Setting node master-0, currentConfig rendered-master-af5fc53c03f26de2a50e8c2bd4ef207b to Done |
| | openshift-machine-config-operator | machineconfigdaemon | master-0 | ConfigDriftMonitorStarted | Config Drift Monitor started, watching against rendered-master-af5fc53c03f26de2a50e8c2bd4ef207b |
| | openshift-machine-config-operator | machineconfigdaemon | master-0 | Uncordon | Update completed for config rendered-master-af5fc53c03f26de2a50e8c2bd4ef207b and node has been uncordoned |
| | default | node-controller | master-0 | RegisteredNode | Node master-0 event: Registered Node master-0 in Controller |
| | kube-system | kube-controller-manager | kube-controller-manager | LeaderElection | master-0_1f1578d6-11ea-467c-8522-967834a76311 became leader |
| (x12) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-satokensignercontroller | kube-controller-manager-operator | SecretUpdateFailed | Failed to update Secret/service-account-private-key -n openshift-kube-controller-manager: Put "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/service-account-private-key": dial tcp 172.30.0.1:443: connect: connection refused |
| (x13) | openshift-machine-config-operator | machineconfigoperator | machine-config | OperatorDegraded: MachineConfigPoolsFailed | Failed to resync 4.18.29 because: Get "https://172.30.0.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/master": dial tcp 172.30.0.1:443: connect: connection refused |
| (x5) | openshift-console | kubelet | console-75b84c855f-2zcgd | ProbeError | Startup probe error: Get "https://10.128.0.116:8443/health": dial tcp 10.128.0.116:8443: connect: connection refused body: |
| (x5) | openshift-console | kubelet | console-75b84c855f-2zcgd | Unhealthy | Startup probe failed: Get "https://10.128.0.116:8443/health": dial tcp 10.128.0.116:8443: connect: connection refused |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | APIServiceCreated | Created APIService.apiregistration.k8s.io/v1beta1.metrics.k8s.io because it was missing |
| | openshift-kube-apiserver | kubelet | kube-apiserver-startup-monitor-master-0 | Killing | Stopping container startup-monitor |
| (x6) | openshift-console | kubelet | console-64cdd44ddd-2t62p | Unhealthy | Startup probe failed: Get "https://10.128.0.113:8443/health": dial tcp 10.128.0.113:8443: connect: connection refused |
| (x6) | openshift-console | kubelet | console-64cdd44ddd-2t62p | ProbeError | Startup probe error: Get "https://10.128.0.113:8443/health": dial tcp 10.128.0.113:8443: connect: connection refused body: |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-satokensignercontroller | kube-controller-manager-operator | SecretUpdated | Updated Secret/service-account-private-key -n openshift-kube-controller-manager because it changed |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ConfigMapCreated | Created ConfigMap/monitoring-shared-config -n openshift-config-managed because it was missing |
| | openshift-console | kubelet | console-64cdd44ddd-2t62p | Killing | Stopping container console |
| | openshift-console | replicaset-controller | console-64cdd44ddd | SuccessfulDelete | Deleted pod: console-64cdd44ddd-2t62p |
| | openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled down replica set console-64cdd44ddd to 0 from 1 |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ValidatingWebhookConfigurationUpdated | Updated ValidatingWebhookConfiguration.admissionregistration.k8s.io/prometheusrules.openshift.io because it changed |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ValidatingWebhookConfigurationUpdated | Updated ValidatingWebhookConfiguration.admissionregistration.k8s.io/alertmanagerconfigs.openshift.io because it changed |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "WellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "",Available message changed from "WellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"99996238-c997-4cd9-aef9-50e5ee960960\", ResourceVersion:\"17659\", Generation:0, CreationTimestamp:time.Date(2025, time.December, 4, 11, 30, 1, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.December, 4, 12, 4, 6, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002c3f428), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" |
| | openshift-cloud-controller-manager-operator | master-0_ceb19d21-60aa-4560-adf7-e03fa8483a9e | cluster-cloud-config-sync-leader | LeaderElection | master-0_ceb19d21-60aa-4560-adf7-e03fa8483a9e became leader |
| | openshift-marketplace | kubelet | marketplace-operator-f797b99b6-hjjrk | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7664a2d4cb10e82ed32abbf95799f43fc3d10135d7dd94799730de504a89680a" already present on machine |
| | openshift-marketplace | kubelet | marketplace-operator-f797b99b6-hjjrk | Created | Created container: marketplace-operator |
| | openshift-marketplace | kubelet | marketplace-operator-f797b99b6-hjjrk | Started | Started container marketplace-operator |
| | openshift-marketplace | kubelet | redhat-operators-v2gh5 | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | redhat-operators-v2gh5 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1ca78c423f43f89a0411e40393642f64e4f8df9e5f61c25e31047c4cce170f9" already present on machine |
| | openshift-marketplace | multus | redhat-operators-v2gh5 | AddedInterface | Add eth0 [10.128.0.117/23] from ovn-kubernetes |
| | openshift-marketplace | multus | redhat-operators-gzs5w | AddedInterface | Add eth0 [10.128.0.118/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | redhat-operators-gzs5w | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1ca78c423f43f89a0411e40393642f64e4f8df9e5f61c25e31047c4cce170f9" already present on machine |
| | openshift-marketplace | kubelet | redhat-operators-gzs5w | Created | Created container: extract-utilities |
| | openshift-marketplace | kubelet | redhat-operators-gzs5w | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | redhat-operators-v2gh5 | Created | Created container: extract-utilities |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | NodeCurrentRevisionChanged | Updated node "master-0" from revision 4 to 5 because static pod is ready |
| | openshift-marketplace | kubelet | redhat-operators-v2gh5 | Pulling | Pulling image "registry.redhat.io/redhat/redhat-operator-index:v4.18" |
| | openshift-marketplace | multus | redhat-operators-mwsqm | AddedInterface | Add eth0 [10.128.0.119/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | redhat-operators-mwsqm | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1ca78c423f43f89a0411e40393642f64e4f8df9e5f61c25e31047c4cce170f9" already present on machine |
| | openshift-marketplace | kubelet | redhat-operators-gzs5w | Pulling | Pulling image "registry.redhat.io/redhat/redhat-operator-index:v4.18" |
| | openshift-marketplace | kubelet | redhat-operators-mwsqm | Created | Created container: extract-utilities |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-marketplace | kubelet | redhat-operators-v2gh5 | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-operator-index:v4.18" in 1.044s (1.044s including waiting). Image size: 1610175307 bytes. |
| | openshift-marketplace | kubelet | redhat-operators-mwsqm | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | redhat-operators-v2gh5 | Created | Created container: extract-content |
| | openshift-marketplace | kubelet | redhat-operators-v2gh5 | Started | Started container extract-content |
| | openshift-marketplace | kubelet | redhat-operators-gzs5w | Started | Started container extract-content |
| | openshift-marketplace | kubelet | redhat-operators-gzs5w | Created | Created container: extract-content |
| | openshift-marketplace | kubelet | redhat-operators-gzs5w | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-operator-index:v4.18" in 1.089s (1.089s including waiting). Image size: 1610175307 bytes. |
| | openshift-marketplace | kubelet | redhat-operators-gzs5w | Created | Created container: registry-server |
| | openshift-marketplace | kubelet | redhat-operators-wgn8s | Created | Created container: extract-utilities |
| | openshift-marketplace | kubelet | redhat-operators-gzs5w | Started | Started container registry-server |
| | openshift-marketplace | kubelet | redhat-operators-gzs5w | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" |
| | openshift-marketplace | multus | redhat-operators-wgn8s | AddedInterface | Add eth0 [10.128.0.120/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | redhat-operators-wgn8s | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1ca78c423f43f89a0411e40393642f64e4f8df9e5f61c25e31047c4cce170f9" already present on machine |
| | openshift-marketplace | kubelet | redhat-operators-gzs5w | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" in 379ms (379ms including waiting). Image size: 912722556 bytes. |
| | openshift-marketplace | kubelet | redhat-operators-mwsqm | Pulling | Pulling image "registry.redhat.io/redhat/redhat-operator-index:v4.18" |
| | openshift-marketplace | kubelet | redhat-operators-wgn8s | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | redhat-operators-v2gh5 | Created | Created container: registry-server |
| | openshift-marketplace | kubelet | redhat-operators-v2gh5 | Started | Started container registry-server |
| | openshift-marketplace | kubelet | redhat-operators-v2gh5 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" |
| | openshift-marketplace | kubelet | redhat-operators-v2gh5 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" in 408ms (408ms including waiting). Image size: 912722556 bytes. |
| | openshift-marketplace | kubelet | redhat-operators-wgn8s | Pulling | Pulling image "registry.redhat.io/redhat/redhat-operator-index:v4.18" |
| | openshift-marketplace | kubelet | redhat-operators-mwsqm | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-operator-index:v4.18" in 815ms (815ms including waiting). Image size: 1610175307 bytes. |
| | openshift-marketplace | kubelet | redhat-operators-wgn8s | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-operator-index:v4.18" in 515ms (515ms including waiting). Image size: 1610175307 bytes. |
| | openshift-marketplace | kubelet | redhat-operators-mwsqm | Started | Started container extract-content |
| | openshift-marketplace | kubelet | redhat-operators-mwsqm | Created | Created container: extract-content |
| | openshift-marketplace | kubelet | redhat-operators-wgn8s | Created | Created container: extract-content |
| | openshift-marketplace | kubelet | redhat-operators-wgn8s | Started | Started container extract-content |
| | openshift-marketplace | kubelet | redhat-operators-mwsqm | Started | Started container registry-server |
| | openshift-marketplace | kubelet | redhat-operators-mwsqm | Created | Created container: registry-server |
| | openshift-marketplace | kubelet | redhat-operators-mwsqm | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" in 380ms (380ms including waiting). Image size: 912722556 bytes. |
| | openshift-marketplace | kubelet | redhat-operators-mwsqm | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" |
| | openshift-marketplace | kubelet | redhat-operators-wgn8s | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" |
| | openshift-marketplace | kubelet | redhat-operators-wgn8s | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" in 498ms (498ms including waiting). Image size: 912722556 bytes. |
| | openshift-marketplace | kubelet | redhat-operators-brsq4 | Pulling | Pulling image "registry.redhat.io/redhat/redhat-operator-index:v4.18" |
| | openshift-marketplace | kubelet | redhat-operators-brsq4 | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | redhat-marketplace-pdr77 | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | redhat-operators-wgn8s | Started | Started container registry-server |
| | openshift-marketplace | kubelet | redhat-marketplace-pdr77 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1ca78c423f43f89a0411e40393642f64e4f8df9e5f61c25e31047c4cce170f9" already present on machine |
| | openshift-marketplace | kubelet | redhat-operators-wgn8s | Created | Created container: registry-server |
| | openshift-marketplace | multus | redhat-operators-brsq4 | AddedInterface | Add eth0 [10.128.0.121/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | redhat-marketplace-pdr77 | Created | Created container: extract-utilities |
| | openshift-marketplace | multus | redhat-marketplace-pdr77 | AddedInterface | Add eth0 [10.128.0.122/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | redhat-operators-brsq4 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1ca78c423f43f89a0411e40393642f64e4f8df9e5f61c25e31047c4cce170f9" already present on machine |
| | openshift-marketplace | kubelet | redhat-operators-brsq4 | Created | Created container: extract-utilities |
| | openshift-marketplace | kubelet | redhat-operators-brsq4 | Created | Created container: extract-content |
| | openshift-marketplace | kubelet | redhat-operators-brsq4 | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-operator-index:v4.18" in 1.134s (1.134s including waiting). Image size: 1610175307 bytes. |
| | openshift-marketplace | kubelet | redhat-operators-brsq4 | Started | Started container extract-content |
| | openshift-marketplace | kubelet | redhat-marketplace-pdr77 | Pulling | Pulling image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" |
| | openshift-marketplace | kubelet | redhat-marketplace-pdr77 | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" in 630ms (630ms including waiting). Image size: 1129027903 bytes. |
| | openshift-marketplace | kubelet | redhat-marketplace-pdr77 | Started | Started container extract-content |
| | openshift-marketplace | kubelet | redhat-marketplace-pdr77 | Created | Created container: extract-content |
| | openshift-marketplace | multus | redhat-operators-b7n8z | AddedInterface | Add eth0 [10.128.0.123/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | redhat-operators-b7n8z | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1ca78c423f43f89a0411e40393642f64e4f8df9e5f61c25e31047c4cce170f9" already present on machine |
| | openshift-marketplace | kubelet | redhat-operators-brsq4 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" |
| | openshift-marketplace | kubelet | redhat-operators-b7n8z | Created | Created container: extract-utilities |
| | openshift-marketplace | kubelet | redhat-marketplace-pdr77 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" |
| | openshift-marketplace | kubelet | redhat-operators-b7n8z | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | redhat-operators-b7n8z | Pulling | Pulling image "registry.redhat.io/redhat/redhat-operator-index:v4.18" |
| (x11) | openshift-kube-controller-manager-operator |
kube-controller-manager-operator-satokensignercontroller |
kube-controller-manager-operator |
SecretUpdateFailed |
Failed to update Secret/service-account-private-key -n openshift-kube-controller-manager: Operation cannot be fulfilled on secrets "service-account-private-key": the object has been modified; please apply your changes to the latest version and try again |
| | openshift-marketplace | kubelet | redhat-operators-brsq4 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" in 444ms (444ms including waiting). Image size: 912722556 bytes. |
| | openshift-marketplace | kubelet | redhat-operators-brsq4 | Started | Started container registry-server |
| | openshift-marketplace | kubelet | redhat-marketplace-pdr77 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" in 719ms (719ms including waiting). Image size: 912722556 bytes. |
| | openshift-marketplace | kubelet | redhat-marketplace-pdr77 | Created | Created container: registry-server |
| | openshift-marketplace | kubelet | redhat-marketplace-pdr77 | Started | Started container registry-server |
| | openshift-marketplace | kubelet | redhat-operators-b7n8z | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-operator-index:v4.18" in 586ms (586ms including waiting). Image size: 1610175307 bytes. |
| | openshift-marketplace | kubelet | redhat-operators-b7n8z | Created | Created container: extract-content |
| | openshift-marketplace | kubelet | redhat-operators-b7n8z | Started | Started container extract-content |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/kube-controller-manager-pod-5 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/config-5 -n openshift-kube-controller-manager because it was missing |
| | openshift-marketplace | kubelet | redhat-operators-brsq4 | Created | Created container: registry-server |
| | openshift-marketplace | kubelet | redhat-operators-xgzlt | Created | Created container: extract-utilities |
| | openshift-marketplace | kubelet | redhat-operators-xgzlt | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1ca78c423f43f89a0411e40393642f64e4f8df9e5f61c25e31047c4cce170f9" already present on machine |
| | openshift-marketplace | kubelet | redhat-operators-b7n8z | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" |
| | openshift-marketplace | kubelet | redhat-operators-xgzlt | Pulling | Pulling image "registry.redhat.io/redhat/redhat-operator-index:v4.18" |
| | openshift-marketplace | multus | redhat-operators-xgzlt | AddedInterface | Add eth0 [10.128.0.124/23] from ovn-kubernetes |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/cluster-policy-controller-config-5 -n openshift-kube-controller-manager because it was missing |
| | openshift-marketplace | kubelet | redhat-operators-xgzlt | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | redhat-operators-mwsqm | Unhealthy | Startup probe failed: timeout: failed to connect service ":50051" within 1s |
| | openshift-marketplace | kubelet | redhat-operators-b7n8z | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" in 361ms (361ms including waiting). Image size: 912722556 bytes. |
| | openshift-marketplace | kubelet | redhat-operators-xgzlt | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-operator-index:v4.18" in 553ms (553ms including waiting). Image size: 1610175307 bytes. |
| | openshift-marketplace | kubelet | redhat-operators-xgzlt | Created | Created container: extract-content |
| | openshift-marketplace | kubelet | redhat-operators-wgn8s | Unhealthy | Startup probe failed: timeout: failed to connect service ":50051" within 1s |
| | openshift-marketplace | kubelet | redhat-operators-b7n8z | Started | Started container registry-server |
| | openshift-marketplace | kubelet | redhat-operators-b7n8z | Created | Created container: registry-server |
| | openshift-marketplace | kubelet | redhat-operators-xgzlt | Started | Started container extract-content |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/controller-manager-kubeconfig-5 -n openshift-kube-controller-manager because it was missing |
| | openshift-marketplace | kubelet | redhat-operators-xgzlt | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/kube-controller-cert-syncer-kubeconfig-5 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/serviceaccount-ca-5 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/service-ca-5 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/recycler-config-5 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | openshift-kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 6"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 4; 0 nodes have achieved new revision 6" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 6" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/service-account-private-key-5 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/serving-cert-5 -n openshift-kube-controller-manager because it was missing |
| | openshift-marketplace | multus | redhat-operators-jjfcc | AddedInterface | Add eth0 [10.128.0.125/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | redhat-operators-xgzlt | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" in 6.572s (6.572s including waiting). Image size: 912722556 bytes. |
| | openshift-marketplace | kubelet | redhat-operators-jjfcc | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1ca78c423f43f89a0411e40393642f64e4f8df9e5f61c25e31047c4cce170f9" already present on machine |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/localhost-recovery-client-token-5 -n openshift-kube-controller-manager because it was missing |
| | openshift-marketplace | kubelet | redhat-operators-xgzlt | Created | Created container: registry-server |
| | openshift-marketplace | kubelet | redhat-operators-xgzlt | Started | Started container registry-server |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | RevisionTriggered | new revision 5 triggered by "required secret/service-account-private-key has changed" |
| | openshift-marketplace | kubelet | redhat-operators-jjfcc | Pulling | Pulling image "registry.redhat.io/redhat/redhat-operator-index:v4.18" |
| | openshift-marketplace | kubelet | redhat-operators-p5ndf | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | redhat-operators-p5ndf | Created | Created container: extract-utilities |
| | openshift-marketplace | kubelet | redhat-operators-p5ndf | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1ca78c423f43f89a0411e40393642f64e4f8df9e5f61c25e31047c4cce170f9" already present on machine |
| | openshift-marketplace | kubelet | redhat-operators-jjfcc | Started | Started container extract-utilities |
| | openshift-marketplace | multus | redhat-operators-p5ndf | AddedInterface | Add eth0 [10.128.0.126/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | redhat-operators-jjfcc | Created | Created container: extract-utilities |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "All is well" to "WellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"99996238-c997-4cd9-aef9-50e5ee960960\", ResourceVersion:\"17659\", Generation:0, CreationTimestamp:time.Date(2025, time.December, 4, 11, 30, 1, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.December, 4, 12, 4, 6, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002c3f428), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" |
| | openshift-marketplace | kubelet | redhat-operators-p5ndf | Pulling | Pulling image "registry.redhat.io/redhat/redhat-operator-index:v4.18" |
| | openshift-marketplace | multus | redhat-marketplace-fhdw5 | AddedInterface | Add eth0 [10.128.0.127/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | redhat-operators-jjfcc | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-operator-index:v4.18" in 1.051s (1.051s including waiting). Image size: 1610175307 bytes. |
| | openshift-marketplace | kubelet | redhat-marketplace-fhdw5 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1ca78c423f43f89a0411e40393642f64e4f8df9e5f61c25e31047c4cce170f9" already present on machine |
| | openshift-marketplace | kubelet | redhat-operators-jjfcc | Created | Created container: extract-content |
| | openshift-marketplace | kubelet | redhat-operators-jjfcc | Started | Started container extract-content |
| | openshift-marketplace | kubelet | redhat-marketplace-fhdw5 | Created | Created container: extract-utilities |
| | openshift-marketplace | kubelet | redhat-marketplace-fhdw5 | Pulling | Pulling image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" |
| | openshift-marketplace | kubelet | redhat-marketplace-fhdw5 | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | redhat-operators-jjfcc | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" |
| | openshift-marketplace | kubelet | redhat-operators-jjfcc | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" in 1.71s (1.71s including waiting). Image size: 912722556 bytes. |
| | openshift-marketplace | kubelet | redhat-operators-p5ndf | Started | Started container extract-content |
| | openshift-marketplace | kubelet | redhat-marketplace-fhdw5 | Started | Started container extract-content |
| | openshift-marketplace | kubelet | redhat-marketplace-fhdw5 | Created | Created container: extract-content |
| | openshift-marketplace | kubelet | redhat-operators-jjfcc | Created | Created container: registry-server |
| | openshift-marketplace | kubelet | redhat-operators-jjfcc | Started | Started container registry-server |
| | openshift-marketplace | kubelet | redhat-marketplace-fhdw5 | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" in 2.478s (2.478s including waiting). Image size: 1129027903 bytes. |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 5"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 4; 0 nodes have achieved new revision 5" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 5" |
| | openshift-marketplace | kubelet | redhat-operators-p5ndf | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-operator-index:v4.18" in 3.287s (3.287s including waiting). Image size: 1610175307 bytes. |
| | openshift-marketplace | kubelet | redhat-operators-p5ndf | Created | Created container: extract-content |
| | openshift-marketplace | kubelet | redhat-marketplace-rq9mp | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1ca78c423f43f89a0411e40393642f64e4f8df9e5f61c25e31047c4cce170f9" already present on machine |
| | openshift-marketplace | multus | redhat-operators-rxp4h | AddedInterface | Add eth0 [10.128.0.128/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | redhat-operators-rxp4h | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1ca78c423f43f89a0411e40393642f64e4f8df9e5f61c25e31047c4cce170f9" already present on machine |
| | openshift-marketplace | kubelet | redhat-operators-rxp4h | Created | Created container: extract-utilities |
| | openshift-marketplace | kubelet | redhat-operators-rxp4h | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | redhat-operators-rxp4h | Pulling | Pulling image "registry.redhat.io/redhat/redhat-operator-index:v4.18" |
| | openshift-marketplace | multus | redhat-marketplace-rq9mp | AddedInterface | Add eth0 [10.128.0.129/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | redhat-marketplace-rq9mp | Pulling | Pulling image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" |
| | openshift-marketplace | kubelet | redhat-marketplace-rq9mp | Created | Created container: extract-utilities |
| | openshift-marketplace | kubelet | redhat-marketplace-rq9mp | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | redhat-marketplace-fhdw5 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" |
| | openshift-marketplace | kubelet | redhat-operators-p5ndf | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" |
| | openshift-marketplace | kubelet | redhat-marketplace-fhdw5 | Created | Created container: registry-server |
| | openshift-marketplace | kubelet | redhat-marketplace-fhdw5 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" in 396ms (396ms including waiting). Image size: 912722556 bytes. |
| | openshift-marketplace | kubelet | redhat-marketplace-fhdw5 | Started | Started container registry-server |
| | openshift-marketplace | kubelet | redhat-marketplace-rq9mp | Started | Started container extract-content |
| | openshift-marketplace | kubelet | redhat-marketplace-rq9mp | Created | Created container: extract-content |
| | openshift-marketplace | kubelet | redhat-operators-rxp4h | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-operator-index:v4.18" in 1.03s (1.03s including waiting). Image size: 1610175307 bytes. |
| | openshift-marketplace | kubelet | redhat-marketplace-rq9mp | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" in 963ms (963ms including waiting). Image size: 1129027903 bytes. |
| | openshift-marketplace | kubelet | redhat-operators-rxp4h | Started | Started container extract-content |
| | openshift-marketplace | kubelet | redhat-operators-p5ndf | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" in 482ms (482ms including waiting). Image size: 912722556 bytes. |
| | openshift-marketplace | kubelet | redhat-operators-p5ndf | Created | Created container: registry-server |
| | openshift-marketplace | kubelet | redhat-operators-p5ndf | Started | Started container registry-server |
| | openshift-marketplace | kubelet | redhat-operators-rxp4h | Created | Created container: extract-content |
| | openshift-marketplace | kubelet | redhat-marketplace-rq9mp | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" |
| | openshift-marketplace | kubelet | redhat-marketplace-zjmf2 | Started | Started container extract-utilities |
| | openshift-marketplace | multus | redhat-marketplace-zjmf2 | AddedInterface | Add eth0 [10.128.0.130/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | redhat-marketplace-rq9mp | Started | Started container registry-server |
| | openshift-marketplace | kubelet | redhat-marketplace-rq9mp | Created | Created container: registry-server |
| | openshift-marketplace | kubelet | redhat-marketplace-rq9mp | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" in 516ms (516ms including waiting). Image size: 912722556 bytes. |
| | openshift-marketplace | kubelet | redhat-marketplace-zjmf2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1ca78c423f43f89a0411e40393642f64e4f8df9e5f61c25e31047c4cce170f9" already present on machine |
| | openshift-marketplace | kubelet | redhat-marketplace-zjmf2 | Created | Created container: extract-utilities |
| | openshift-marketplace | kubelet | redhat-marketplace-zjmf2 | Pulling | Pulling image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" |
| | openshift-marketplace | kubelet | redhat-operators-rxp4h | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" |
| | openshift-marketplace | kubelet | redhat-operators-rxp4h | Created | Created container: registry-server |
| | openshift-marketplace | kubelet | redhat-operators-rxp4h | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" in 357ms (357ms including waiting). Image size: 912722556 bytes. |
| | openshift-marketplace | kubelet | redhat-marketplace-zjmf2 | Started | Started container extract-content |
| | openshift-marketplace | kubelet | redhat-operators-p5ndf | Unhealthy | Startup probe failed: timeout: failed to connect service ":50051" within 1s |
| | openshift-monitoring | statefulset-controller | alertmanager-main | SuccessfulDelete | delete Pod alertmanager-main-0 in StatefulSet alertmanager-main successful |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Killing | Stopping container kube-rbac-proxy-metric |
| | openshift-marketplace | kubelet | redhat-marketplace-zjmf2 | Created | Created container: extract-content |
| | openshift-marketplace | kubelet | redhat-marketplace-zjmf2 | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" in 1.679s (1.679s including waiting). Image size: 1129027903 bytes. |
| | openshift-marketplace | kubelet | redhat-operators-rxp4h | Started | Started container registry-server |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Killing | Stopping container alertmanager |
| | openshift-marketplace | kubelet | redhat-marketplace-zjmf2 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" |
| | openshift-marketplace | multus | redhat-marketplace-ndng9 | AddedInterface | Add eth0 [10.128.0.131/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | redhat-marketplace-ndng9 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1ca78c423f43f89a0411e40393642f64e4f8df9e5f61c25e31047c4cce170f9" already present on machine |
| | openshift-marketplace | kubelet | redhat-marketplace-ndng9 | Created | Created container: extract-utilities |
| | openshift-marketplace | kubelet | redhat-marketplace-ndng9 | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | redhat-marketplace-zjmf2 | Created | Created container: registry-server |
| | openshift-marketplace | kubelet | redhat-marketplace-ndng9 | Pulling | Pulling image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" |
| | openshift-marketplace | kubelet | redhat-marketplace-ndng9 | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" in 897ms (897ms including waiting). Image size: 1129027903 bytes. |
| | openshift-marketplace | kubelet | redhat-marketplace-zjmf2 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" in 2.397s (2.397s including waiting). Image size: 912722556 bytes. |
| | openshift-monitoring | statefulset-controller | alertmanager-main | SuccessfulCreate | create Pod alertmanager-main-0 in StatefulSet alertmanager-main successful |
| | openshift-marketplace | kubelet | redhat-marketplace-zjmf2 | Started | Started container registry-server |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded changed from True to False ("All is well"),Progressing changed from True to False ("All is well"),Available changed from False to True ("All is well") |
| | openshift-marketplace | multus | redhat-marketplace-g7j5b | AddedInterface | Add eth0 [10.128.0.132/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | redhat-marketplace-ndng9 | Started | Started container extract-content |
| | openshift-marketplace | kubelet | redhat-marketplace-g7j5b | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1ca78c423f43f89a0411e40393642f64e4f8df9e5f61c25e31047c4cce170f9" already present on machine |
| | openshift-marketplace | kubelet | redhat-operators-rxp4h | Unhealthy | Startup probe failed: timeout: failed to connect service ":50051" within 1s |
| | openshift-marketplace | kubelet | redhat-marketplace-ndng9 | Created | Created container: extract-content |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: init-config-reloader |
| | openshift-monitoring | multus | alertmanager-main-0 | AddedInterface | Add eth0 [10.128.0.133/23] from ovn-kubernetes |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7d0b49cba33162ab0c486a96c5767cf5ed237a065cf6a4e2fc01d60a13f418bf" already present on machine |
| | openshift-marketplace | kubelet | redhat-marketplace-g7j5b | Created | Created container: extract-utilities |
| | openshift-marketplace | kubelet | redhat-marketplace-ndng9 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" |
| | openshift-marketplace | kubelet | redhat-marketplace-ndng9 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" in 1.811s (1.811s including waiting). Image size: 912722556 bytes. |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container init-config-reloader |
| | openshift-marketplace | kubelet | redhat-marketplace-g7j5b | Started | Started container extract-utilities |
| | openshift-marketplace | multus | redhat-marketplace-7j7ql | AddedInterface | Add eth0 [10.128.0.134/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | redhat-marketplace-ndng9 | Created | Created container: registry-server |
| | openshift-marketplace | kubelet | redhat-marketplace-ndng9 | Started | Started container registry-server |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91795c7ae050c24ea79ae91b18a4e39a1a527b046deecf7fc795c22caf0b3f59" already present on machine |
| | openshift-marketplace | kubelet | redhat-marketplace-g7j5b | Pulling | Pulling image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: alertmanager |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container alertmanager |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7d0b49cba33162ab0c486a96c5767cf5ed237a065cf6a4e2fc01d60a13f418bf" already present on machine |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: config-reloader |
| | openshift-marketplace | kubelet | redhat-marketplace-7j7ql | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1ca78c423f43f89a0411e40393642f64e4f8df9e5f61c25e31047c4cce170f9" already present on machine |
| | openshift-marketplace | kubelet | redhat-marketplace-g7j5b | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" in 2.554s (2.554s including waiting). Image size: 1129027903 bytes. |
| | openshift-marketplace | kubelet | redhat-marketplace-g7j5b | Created | Created container: extract-content |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69ffd8f8dcceedc2d6eb306cea33f8beabc1be1308cd5f4ee8b9a8e3eab9843" already present on machine |
| | openshift-marketplace | kubelet | redhat-marketplace-g7j5b | Started | Started container extract-content |
| | openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled up replica set console-6fbd8c7bd5 to 1 |
| | openshift-console | replicaset-controller | console-6fbd8c7bd5 | SuccessfulCreate | Created pod: console-6fbd8c7bd5-6tskd |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: kube-rbac-proxy-web |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69ffd8f8dcceedc2d6eb306cea33f8beabc1be1308cd5f4ee8b9a8e3eab9843" already present on machine |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container config-reloader |
| | openshift-marketplace | kubelet | redhat-marketplace-7j7ql | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | redhat-marketplace-7j7ql | Created | Created container: extract-utilities |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container kube-rbac-proxy-web |
| | openshift-marketplace | kubelet | redhat-marketplace-7j7ql | Pulling | Pulling image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" |
| | openshift-marketplace | kubelet | redhat-marketplace-2xqkk | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1ca78c423f43f89a0411e40393642f64e4f8df9e5f61c25e31047c4cce170f9" already present on machine |
| | openshift-marketplace | multus | redhat-marketplace-2xqkk | AddedInterface | Add eth0 [10.128.0.135/23] from ovn-kubernetes |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: prom-label-proxy |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c84b5ebe858246af77fb40b85b6ea917fa2a4a651b740cd3320d461164d0ef8" already present on machine |
| | openshift-console | multus | console-6fbd8c7bd5-6tskd | AddedInterface | Add eth0 [10.128.0.136/23] from ovn-kubernetes |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container kube-rbac-proxy-metric |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: kube-rbac-proxy-metric |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69ffd8f8dcceedc2d6eb306cea33f8beabc1be1308cd5f4ee8b9a8e3eab9843" already present on machine |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container kube-rbac-proxy |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: kube-rbac-proxy |
| | openshift-console | kubelet | console-6fbd8c7bd5-6tskd | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e27a636083db9043e3e4bbdc336b5e7fb5693422246e443fd1d913e157f01d46" already present on machine |
| | openshift-marketplace | kubelet | redhat-marketplace-7j7ql | Started | Started container extract-content |
| | openshift-marketplace | kubelet | redhat-marketplace-2xqkk | Started | Started container extract-content |
| | openshift-console | kubelet | console-6fbd8c7bd5-6tskd | Started | Started container console |
| | openshift-console | kubelet | console-6fbd8c7bd5-6tskd | Created | Created container: console |
| | openshift-marketplace | kubelet | redhat-marketplace-g7j5b | Started | Started container registry-server |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container prom-label-proxy |
| | openshift-marketplace | kubelet | redhat-marketplace-g7j5b | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" in 400ms (400ms including waiting). Image size: 912722556 bytes. |
| | openshift-marketplace | kubelet | redhat-marketplace-g7j5b | Created | Created container: registry-server |
| | openshift-marketplace | kubelet | redhat-marketplace-g7j5b | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" |
| | openshift-marketplace | kubelet | redhat-marketplace-2xqkk | Pulling | Pulling image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" |
| | openshift-marketplace | kubelet | redhat-marketplace-2xqkk | Created | Created container: extract-content |
| | openshift-marketplace | kubelet | redhat-marketplace-2xqkk | Created | Created container: extract-utilities |
openshift-cluster-olm-operator |
olm-status-controller-statussyncer_olm |
cluster-olm-operator |
OperatorStatusChanged |
Status for clusteroperator/olm changed: Degraded message changed from "All is well" to "CatalogdStaticResourcesDegraded: \"catalogd/00-namespace-openshift-catalogd.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-catalogd\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/01-customresourcedefinition-clustercatalogs.olm.operatorframework.io.yml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/clustercatalogs.olm.operatorframework.io\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/02-serviceaccount-openshift-catalogd-catalogd-controller-manager.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-catalogd/serviceaccounts/catalogd-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/03-role-openshift-catalogd-catalogd-leader-election-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-catalogd/roles/catalogd-leader-election-role\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/04-role-openshift-config-catalogd-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config/roles/catalogd-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/05-clusterrole-catalogd-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/catalogd-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/06-clusterrole-catalogd-metrics-reader.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/catalogd-metrics-reader\": dial tcp 172.30.0.1:443: connect: connection 
refused\nCatalogdStaticResourcesDegraded: \nOperatorControllerStaticResourcesDegraded: \"operator-controller/00-namespace-openshift-operator-controller.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-operator-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/01-customresourcedefinition-clusterextensions.olm.operatorframework.io.yml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/clusterextensions.olm.operatorframework.io\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/02-serviceaccount-openshift-operator-controller-operator-controller-controller-manager.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-operator-controller/serviceaccounts/operator-controller-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/03-role-openshift-config-operator-controller-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config/roles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/04-role-openshift-operator-controller-operator-controller-leader-election-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/roles/operator-controller-leader-election-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/05-role-openshift-operator-controller-operator-controller-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/roles/operator-controller-manager-role\": dial tcp 
172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/06-clusterrole-operator-controller-clusterextension-editor-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-clusterextension-editor-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/07-clusterrole-operator-controller-clusterextension-viewer-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-clusterextension-viewer-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/08-clusterrole-operator-controller-extension-editor-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-extension-editor-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/09-clusterrole-operator-controller-extension-viewer-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-extension-viewer-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/10-clusterrole-operator-controller-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/11-clusterrole-operator-controller-metrics-reader.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-metrics-reader\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: 
\"operator-controller/12-clusterrole-operator-controller-proxy-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-proxy-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/13-rolebinding-openshift-config-operator-controller-manager-rolebinding.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config/rolebindings/operator-controller-manager-rolebinding\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/14-rolebinding-openshift-operator-controller-operator-controller-leader-election-rolebinding.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/rolebindings/operator-controller-leader-election-rolebinding\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: " | |
openshift-marketplace |
kubelet |
redhat-marketplace-2xqkk |
Started |
Started container extract-utilities | |
openshift-marketplace |
kubelet |
redhat-marketplace-2xqkk |
Pulled |
Successfully pulled image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" in 527ms (527ms including waiting). Image size: 1129027903 bytes. | |
openshift-marketplace |
kubelet |
redhat-marketplace-7j7ql |
Pulled |
Successfully pulled image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" in 907ms (907ms including waiting). Image size: 1129027903 bytes. | |
openshift-marketplace |
kubelet |
redhat-marketplace-7j7ql |
Created |
Created container: extract-content | |
| (x14) | openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
StartingNewRevision |
new revision 5 triggered by "required secret/service-account-private-key has changed" | |
openshift-marketplace |
kubelet |
redhat-marketplace-7j7ql |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" | |
openshift-marketplace |
kubelet |
redhat-marketplace-7j7ql |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" in 403ms (403ms including waiting). Image size: 912722556 bytes. | |
openshift-marketplace |
kubelet |
redhat-marketplace-7j7ql |
Created |
Created container: registry-server | |
openshift-marketplace |
kubelet |
redhat-marketplace-7j7ql |
Started |
Started container registry-server | |
openshift-cluster-olm-operator |
olm-status-controller-statussyncer_olm |
cluster-olm-operator |
OperatorStatusChanged |
Status for clusteroperator/olm changed: Degraded message changed from "CatalogdStaticResourcesDegraded: \"catalogd/00-namespace-openshift-catalogd.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-catalogd\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/01-customresourcedefinition-clustercatalogs.olm.operatorframework.io.yml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/clustercatalogs.olm.operatorframework.io\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/02-serviceaccount-openshift-catalogd-catalogd-controller-manager.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-catalogd/serviceaccounts/catalogd-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/03-role-openshift-catalogd-catalogd-leader-election-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-catalogd/roles/catalogd-leader-election-role\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/04-role-openshift-config-catalogd-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config/roles/catalogd-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/05-clusterrole-catalogd-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/catalogd-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/06-clusterrole-catalogd-metrics-reader.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/catalogd-metrics-reader\": dial tcp 172.30.0.1:443: connect: connection 
refused\nCatalogdStaticResourcesDegraded: \nOperatorControllerStaticResourcesDegraded: \"operator-controller/00-namespace-openshift-operator-controller.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-operator-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/01-customresourcedefinition-clusterextensions.olm.operatorframework.io.yml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/clusterextensions.olm.operatorframework.io\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/02-serviceaccount-openshift-operator-controller-operator-controller-controller-manager.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-operator-controller/serviceaccounts/operator-controller-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/03-role-openshift-config-operator-controller-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config/roles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/04-role-openshift-operator-controller-operator-controller-leader-election-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/roles/operator-controller-leader-election-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/05-role-openshift-operator-controller-operator-controller-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/roles/operator-controller-manager-role\": dial tcp 
172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/06-clusterrole-operator-controller-clusterextension-editor-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-clusterextension-editor-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/07-clusterrole-operator-controller-clusterextension-viewer-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-clusterextension-viewer-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/08-clusterrole-operator-controller-extension-editor-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-extension-editor-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/09-clusterrole-operator-controller-extension-viewer-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-extension-viewer-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/10-clusterrole-operator-controller-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/11-clusterrole-operator-controller-metrics-reader.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-metrics-reader\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: 
\"operator-controller/12-clusterrole-operator-controller-proxy-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-proxy-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/13-rolebinding-openshift-config-operator-controller-manager-rolebinding.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config/rolebindings/operator-controller-manager-rolebinding\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/14-rolebinding-openshift-operator-controller-operator-controller-leader-election-rolebinding.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/rolebindings/operator-controller-leader-election-rolebinding\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: " to "OperatorControllerStaticResourcesDegraded: \"operator-controller/00-namespace-openshift-operator-controller.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-operator-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/01-customresourcedefinition-clusterextensions.olm.operatorframework.io.yml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/clusterextensions.olm.operatorframework.io\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/02-serviceaccount-openshift-operator-controller-operator-controller-controller-manager.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-operator-controller/serviceaccounts/operator-controller-controller-manager\": dial tcp 172.30.0.1:443: connect: connection 
refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/03-role-openshift-config-operator-controller-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config/roles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/04-role-openshift-operator-controller-operator-controller-leader-election-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/roles/operator-controller-leader-election-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/05-role-openshift-operator-controller-operator-controller-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/roles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/06-clusterrole-operator-controller-clusterextension-editor-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-clusterextension-editor-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/07-clusterrole-operator-controller-clusterextension-viewer-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-clusterextension-viewer-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/08-clusterrole-operator-controller-extension-editor-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-extension-editor-role\": dial tcp 172.30.0.1:443: 
connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/09-clusterrole-operator-controller-extension-viewer-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-extension-viewer-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/10-clusterrole-operator-controller-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/11-clusterrole-operator-controller-metrics-reader.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-metrics-reader\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/12-clusterrole-operator-controller-proxy-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-proxy-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/13-rolebinding-openshift-config-operator-controller-manager-rolebinding.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config/rolebindings/operator-controller-manager-rolebinding\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/14-rolebinding-openshift-operator-controller-operator-controller-leader-election-rolebinding.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/rolebindings/operator-controller-leader-election-rolebinding\": dial tcp 172.30.0.1:443: connect: connection 
refused\nOperatorControllerStaticResourcesDegraded: " | |
openshift-marketplace |
kubelet |
redhat-marketplace-2xqkk |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" in 380ms (380ms including waiting). Image size: 912722556 bytes. | |
openshift-marketplace |
kubelet |
redhat-marketplace-2xqkk |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" | |
openshift-marketplace |
kubelet |
redhat-marketplace-2xqkk |
Created |
Created container: registry-server | |
openshift-marketplace |
kubelet |
redhat-marketplace-2xqkk |
Started |
Started container registry-server | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "WellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"99996238-c997-4cd9-aef9-50e5ee960960\", ResourceVersion:\"17659\", Generation:0, CreationTimestamp:time.Date(2025, time.December, 4, 11, 30, 1, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.December, 4, 12, 4, 6, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002c3f428), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" to "All is well" | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Progressing changed from True to False ("AuthenticatorCertKeyProgressing: All is well"),Available changed from False to True ("All is well") | |
openshift-marketplace |
kubelet |
redhat-operators-gzs5w |
Killing |
Stopping container registry-server | |
openshift-cluster-olm-operator |
olm-status-controller-statussyncer_olm |
cluster-olm-operator |
OperatorStatusChanged |
Status for clusteroperator/olm changed: Degraded message changed from "OperatorControllerStaticResourcesDegraded: \"operator-controller/00-namespace-openshift-operator-controller.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-operator-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/01-customresourcedefinition-clusterextensions.olm.operatorframework.io.yml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/clusterextensions.olm.operatorframework.io\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/02-serviceaccount-openshift-operator-controller-operator-controller-controller-manager.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-operator-controller/serviceaccounts/operator-controller-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/03-role-openshift-config-operator-controller-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config/roles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/04-role-openshift-operator-controller-operator-controller-leader-election-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/roles/operator-controller-leader-election-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/05-role-openshift-operator-controller-operator-controller-manager-role.yml\" (string): Get 
\"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/roles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/06-clusterrole-operator-controller-clusterextension-editor-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-clusterextension-editor-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/07-clusterrole-operator-controller-clusterextension-viewer-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-clusterextension-viewer-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/08-clusterrole-operator-controller-extension-editor-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-extension-editor-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/09-clusterrole-operator-controller-extension-viewer-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-extension-viewer-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/10-clusterrole-operator-controller-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/11-clusterrole-operator-controller-metrics-reader.yml\" (string): Get 
\"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-metrics-reader\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/12-clusterrole-operator-controller-proxy-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-proxy-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/13-rolebinding-openshift-config-operator-controller-manager-rolebinding.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config/rolebindings/operator-controller-manager-rolebinding\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/14-rolebinding-openshift-operator-controller-operator-controller-leader-election-rolebinding.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/rolebindings/operator-controller-leader-election-rolebinding\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: " to "All is well" | |
openshift-marketplace |
kubelet |
redhat-operators-p5ndf |
Killing |
Stopping container registry-server | |
openshift-marketplace |
kubelet |
redhat-operators-b7n8z |
Killing |
Stopping container registry-server | |
openshift-marketplace |
kubelet |
redhat-operators-jjfcc |
Killing |
Stopping container registry-server | |
openshift-marketplace |
kubelet |
redhat-operators-v2gh5 |
Killing |
Stopping container registry-server | |
openshift-marketplace |
kubelet |
redhat-operators-xgzlt |
Killing |
Stopping container registry-server | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"revision-status-5\" not found\nSATokenSignerDegraded: Operation cannot be fulfilled on secrets \"service-account-private-key\": the object has been modified; please apply your changes to the latest version and try again\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-openshift-infra.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-infra\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/svc.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/services/kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/kube-controller-manager-sa\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/recycler-sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-infra/serviceaccounts/pv-recycler-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-client-crb.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-controller-manager-recovery\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/localhost-recovery-client\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-sa.yaml\" (string): Delete \"https://172.30.0.1:443/api/v1/namespaces/kube-system/serviceaccounts/vsphere-legacy-cloud-provider\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-role.yaml\" (string): Delete \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-binding.yaml\" (string): Delete \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: " | |
| | openshift-marketplace | kubelet | redhat-operators-wgn8s | Killing | Stopping container registry-server |
| | openshift-marketplace | kubelet | redhat-operators-mwsqm | Killing | Stopping container registry-server |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"revision-status-5\" not found\nSATokenSignerDegraded: Operation cannot be fulfilled on secrets \"service-account-private-key\": the object has been modified; please apply your changes to the latest version and try again\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-openshift-infra.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-infra\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/svc.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/services/kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/kube-controller-manager-sa\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/recycler-sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-infra/serviceaccounts/pv-recycler-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-client-crb.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-controller-manager-recovery\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/localhost-recovery-client\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-sa.yaml\" (string): Delete \"https://172.30.0.1:443/api/v1/namespaces/kube-system/serviceaccounts/vsphere-legacy-cloud-provider\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-role.yaml\" (string): Delete \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-binding.yaml\" (string): Delete \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: " to "NodeControllerDegraded: All master nodes are ready\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-openshift-infra.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-infra\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/svc.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/services/kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/kube-controller-manager-sa\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/recycler-sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-infra/serviceaccounts/pv-recycler-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-client-crb.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-controller-manager-recovery\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/localhost-recovery-client\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-sa.yaml\" (string): Delete \"https://172.30.0.1:443/api/v1/namespaces/kube-system/serviceaccounts/vsphere-legacy-cloud-provider\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-role.yaml\" (string): Delete \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-binding.yaml\" (string): Delete \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: " |
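The long Degraded messages in these events are just newline-joined lists of `<ConditionName>: <detail>` entries. As an illustration only (not part of any operator, and assuming the `\n` escapes are expanded to real newlines), a throwaway parser like this makes such messages greppable:

```python
import re

def split_degraded(message: str) -> list[tuple[str, str]]:
    """Split an operator Degraded message into (condition, detail) pairs.
    Entries look like 'SomethingDegraded: detail text', one per line."""
    pairs = []
    for line in message.split("\n"):
        m = re.match(r"([A-Za-z]+Degraded): ?(.*)", line)
        if m:
            pairs.append((m.group(1), m.group(2)))
    return pairs

msg = ('NodeControllerDegraded: All master nodes are ready\n'
       'RevisionControllerDegraded: configmap "revision-status-5" not found')
print(split_degraded(msg))
```

Grouping the pairs by condition name quickly shows that every `KubeControllerManagerStaticResourcesDegraded` entry above shares the same root cause: `dial tcp 172.30.0.1:443: connect: connection refused`.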
| | openshift-marketplace | kubelet | redhat-operators-brsq4 | Killing | Stopping container registry-server |
| | openshift-marketplace | kubelet | redhat-operators-rxp4h | Killing | Stopping container registry-server |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Killing | Stopping container prometheus |
| | openshift-monitoring | statefulset-controller | prometheus-k8s | SuccessfulDelete | delete Pod prometheus-k8s-0 in StatefulSet prometheus-k8s successful |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 4; 0 nodes have achieved new revision 5"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 4" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 4; 0 nodes have achieved new revision 5" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | NodeTargetRevisionChanged | Updating node "master-0" from revision 4 to 5 because node master-0 with revision 4 is the oldest |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Killing | Stopping container kube-rbac-proxy |
| (x2) | openshift-console-operator | console-operator-console-operator-consoleoperator | console-operator | DeploymentUpdateFailed | Failed to update Deployment.apps/console -n openshift-console: Operation cannot be fulfilled on deployments.apps "console": the object has been modified; please apply your changes to the latest version and try again |
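The "object has been modified; please apply your changes to the latest version and try again" errors here (and in the SATokenSignerDegraded entry above) are Kubernetes optimistic-concurrency conflicts: an update carrying a stale `resourceVersion` is rejected, and the client is expected to re-read and retry, as client-go's `RetryOnConflict` helper does. A toy in-memory sketch of that scheme (illustration only, not the apiserver's implementation):

```python
class Conflict(Exception):
    """Raised when an update is based on a stale resourceVersion."""

class Store:
    """Toy object store with Kubernetes-style optimistic concurrency."""
    def __init__(self, obj: dict):
        self.obj = dict(obj, resourceVersion=1)

    def get(self) -> dict:
        return dict(self.obj)

    def update(self, obj: dict) -> dict:
        if obj["resourceVersion"] != self.obj["resourceVersion"]:
            # Mirrors: "the object has been modified; please apply
            # your changes to the latest version and try again"
            raise Conflict("stale resourceVersion")
        self.obj = dict(obj, resourceVersion=self.obj["resourceVersion"] + 1)
        return dict(self.obj)

def update_with_retry(store: Store, mutate, retries: int = 3) -> dict:
    """Re-read and re-apply the mutation on conflict."""
    for _ in range(retries):
        latest = store.get()
        mutate(latest)
        try:
            return store.update(latest)
        except Conflict:
            continue
    raise Conflict("retries exhausted")
```

The `(x2)` count on the event suggests the console operator hit the conflict, retried on a later sync, and eventually succeeded, which is why the rollout below proceeds.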
| | openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled down replica set console-75b84c855f to 0 from 1 |
| | openshift-console | replicaset-controller | console-75b84c855f | SuccessfulDelete | Deleted pod: console-75b84c855f-2zcgd |
| | openshift-monitoring | statefulset-controller | prometheus-k8s | SuccessfulCreate | create Pod prometheus-k8s-0 in StatefulSet prometheus-k8s successful |
| | openshift-console | kubelet | console-75b84c855f-2zcgd | Killing | Stopping container console |
| | openshift-monitoring | multus | prometheus-k8s-0 | AddedInterface | Add eth0 [10.128.0.137/23] from ovn-kubernetes |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7d0b49cba33162ab0c486a96c5767cf5ed237a065cf6a4e2fc01d60a13f418bf" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: init-config-reloader |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container init-config-reloader |
| | openshift-console | replicaset-controller | console-6475766b4d | SuccessfulCreate | Created pod: console-6475766b4d-m2nml |
| | openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled up replica set console-6475766b4d to 1 |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-openshift-infra.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-infra\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/svc.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/services/kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/kube-controller-manager-sa\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/recycler-sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-infra/serviceaccounts/pv-recycler-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-client-crb.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-controller-manager-recovery\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/localhost-recovery-client\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-sa.yaml\" (string): Delete \"https://172.30.0.1:443/api/v1/namespaces/kube-system/serviceaccounts/vsphere-legacy-cloud-provider\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-role.yaml\" (string): Delete \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-binding.yaml\" (string): Delete \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: " to "NodeControllerDegraded: All master nodes are ready" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | PodCreated | Created Pod/installer-5-master-0 -n openshift-kube-controller-manager because it was missing |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: prometheus |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7d0b49cba33162ab0c486a96c5767cf5ed237a065cf6a4e2fc01d60a13f418bf" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a9a6271d3a19d3ceff897d9d414271723a984d7c45b94aa521b2c8aa20e95983" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container config-reloader |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: config-reloader |
| | openshift-kube-controller-manager | kubelet | installer-5-master-0 | Started | Started container installer |
| | openshift-kube-controller-manager | kubelet | installer-5-master-0 | Created | Created container: installer |
| | openshift-kube-controller-manager | kubelet | installer-5-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a824e468cf8dd61d347e35b2ee5bc2f815666957647098e21a1bb56ff613e5b9" already present on machine |
| | openshift-kube-controller-manager | multus | installer-5-master-0 | AddedInterface | Add eth0 [10.128.0.138/23] from ovn-kubernetes |
| | openshift-console | multus | console-6475766b4d-m2nml | AddedInterface | Add eth0 [10.128.0.139/23] from ovn-kubernetes |
| | openshift-console | kubelet | console-6475766b4d-m2nml | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e27a636083db9043e3e4bbdc336b5e7fb5693422246e443fd1d913e157f01d46" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:03d91f263cf6eef98d53e83e218e32a55576ebdd31daa8f6abd33b8866c3d5c4" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container prometheus |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69ffd8f8dcceedc2d6eb306cea33f8beabc1be1308cd5f4ee8b9a8e3eab9843" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: thanos-sidecar |
| | openshift-console | kubelet | console-6475766b4d-m2nml | Created | Created container: console |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69ffd8f8dcceedc2d6eb306cea33f8beabc1be1308cd5f4ee8b9a8e3eab9843" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container kube-rbac-proxy-web |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: kube-rbac-proxy-web |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: kube-rbac-proxy |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container thanos-sidecar |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container kube-rbac-proxy |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container kube-rbac-proxy-thanos |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: kube-rbac-proxy-thanos |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69ffd8f8dcceedc2d6eb306cea33f8beabc1be1308cd5f4ee8b9a8e3eab9843" already present on machine |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-console | kubelet | console-6475766b4d-m2nml | Started | Started container console |
openshift-marketplace |
kubelet |
redhat-marketplace-fhdw5 |
Killing |
Stopping container registry-server | |
openshift-marketplace |
kubelet |
redhat-marketplace-rq9mp |
Killing |
Stopping container registry-server | |
openshift-marketplace |
kubelet |
redhat-marketplace-pdr77 |
Killing |
Stopping container registry-server | |
openshift-marketplace |
kubelet |
redhat-marketplace-7j7ql |
Killing |
Stopping container registry-server | |
openshift-marketplace |
kubelet |
redhat-marketplace-g7j5b |
Killing |
Stopping container registry-server | |
openshift-marketplace |
kubelet |
redhat-marketplace-2xqkk |
Killing |
Stopping container registry-server | |
openshift-marketplace |
kubelet |
redhat-marketplace-zjmf2 |
Killing |
Stopping container registry-server | |
openshift-marketplace |
kubelet |
redhat-marketplace-ndng9 |
Killing |
Stopping container registry-server | |
openshift-console-operator |
console-operator-status-controller-statussyncer_console |
console-operator |
OperatorStatusChanged |
Status for clusteroperator/console changed: Progressing changed from False to True ("SyncLoopRefreshProgressing: working toward version 4.18.29, 1 replicas available") | |
openshift-console |
replicaset-controller |
console-6fbd8c7bd5 |
SuccessfulDelete |
Deleted pod: console-6fbd8c7bd5-6tskd | |
openshift-console |
kubelet |
console-6fbd8c7bd5-6tskd |
Killing |
Stopping container console | |
openshift-console |
deployment-controller |
console |
ScalingReplicaSet |
Scaled down replica set console-6fbd8c7bd5 to 0 from 1 | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ConfigMapUpdated |
Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml | |
openshift-kube-controller-manager |
static-pod-installer |
installer-5-master-0 |
StaticPodInstallerCompleted |
Successfully installed revision 5 | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master-0 |
Killing |
Stopping container cluster-policy-controller | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master-0 |
Killing |
Stopping container kube-controller-manager | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master-0 |
Killing |
Stopping container kube-controller-manager-recovery-controller | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master-0 |
Killing |
Stopping container kube-controller-manager-cert-syncer | |
openshift-apiserver-operator |
openshift-apiserver-operator-connectivity-check-controller-connectivitycheckcontroller |
openshift-apiserver-operator |
CustomResourceDefinitionCreated |
Created CustomResourceDefinition.apiextensions.k8s.io/podnetworkconnectivitychecks.controlplane.operator.openshift.io because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-connectivity-check-controller-connectivitycheckcontroller |
openshift-kube-apiserver-operator |
CustomResourceDefinitionCreateFailed |
Failed to create CustomResourceDefinition.apiextensions.k8s.io/podnetworkconnectivitychecks.controlplane.operator.openshift.io: customresourcedefinitions.apiextensions.k8s.io "podnetworkconnectivitychecks.controlplane.operator.openshift.io" already exists | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6d5891cdd7dcf7c9081de8b364b4c96446b7f946f7880fbae291a4592a198264" already present on machine | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master-0 |
Started |
Started container kube-controller-manager | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master-0 |
Started |
Started container kube-controller-manager-cert-syncer | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master-0 |
Created |
Created container: kube-controller-manager-recovery-controller | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master-0 |
Started |
Started container kube-controller-manager-recovery-controller | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a824e468cf8dd61d347e35b2ee5bc2f815666957647098e21a1bb56ff613e5b9" already present on machine | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d64c13fe7663a0b4ae61d103b1b7598adcf317a01826f296bcb66b1a2de83c96" already present on machine | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a824e468cf8dd61d347e35b2ee5bc2f815666957647098e21a1bb56ff613e5b9" already present on machine | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master-0 |
Created |
Created container: kube-controller-manager-cert-syncer | |
openshift-kube-controller-manager |
cluster-policy-controller |
kube-controller-manager-master-0 |
ControlPlaneTopology |
unable to get control plane topology, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope | |
openshift-kube-controller-manager |
cluster-policy-controller |
cluster-policy-controller-lock |
LeaderElection |
master-0_4bf38fe2-5de9-4db2-8446-3074dfa78404 became leader | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master-0 |
Created |
Created container: cluster-policy-controller | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master-0 |
Created |
Created container: kube-controller-manager | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master-0 |
Started |
Started container cluster-policy-controller | |
openshift-kube-controller-manager |
cert-recovery-controller |
cert-recovery-controller-lock |
LeaderElection |
master-0_9538d849-a71f-4629-9adb-6f35f0e72930 became leader | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 5"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 4; 0 nodes have achieved new revision 5" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 5" | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-installer-controller |
kube-controller-manager-operator |
NodeCurrentRevisionChanged |
Updated node "master-0" from revision 4 to 5 because static pod is ready | |
openshift-kube-controller-manager |
cluster-policy-controller-namespace-security-allocation-controller |
kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for sushy-emulator namespace | |
default |
node-controller |
master-0 |
RegisteredNode |
Node master-0 event: Registered Node master-0 in Controller | |
kube-system |
kube-controller-manager |
kube-controller-manager |
LeaderElection |
master-0_6c96b0e8-4c2e-4144-b992-294dff86bc28 became leader | |
openshift-kube-controller-manager |
cluster-policy-controller-namespace-security-allocation-controller |
kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-storage namespace | |
openshift-marketplace |
job-controller |
7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d40dd54 |
SuccessfulCreate |
Created pod: 7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d42bd6t | |
openshift-marketplace |
kubelet |
7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d42bd6t |
Started |
Started container util | |
openshift-marketplace |
kubelet |
7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d42bd6t |
Created |
Created container: util | |
openshift-marketplace |
kubelet |
7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d42bd6t |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1ca78c423f43f89a0411e40393642f64e4f8df9e5f61c25e31047c4cce170f9" already present on machine | |
openshift-marketplace |
multus |
7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d42bd6t |
AddedInterface |
Add eth0 [10.128.0.141/23] from ovn-kubernetes | |
openshift-marketplace |
kubelet |
7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d42bd6t |
Pulling |
Pulling image "registry.redhat.io/lvms4/lvms-operator-bundle@sha256:82bcaba7f44d28e8529915af868c847107932a6f8cc9d2eaf34796c578c7a5ba" | |
openshift-marketplace |
kubelet |
7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d42bd6t |
Created |
Created container: pull | |
openshift-marketplace |
kubelet |
7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d42bd6t |
Started |
Started container pull | |
openshift-marketplace |
kubelet |
7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d42bd6t |
Pulled |
Successfully pulled image "registry.redhat.io/lvms4/lvms-operator-bundle@sha256:82bcaba7f44d28e8529915af868c847107932a6f8cc9d2eaf34796c578c7a5ba" in 1.154s (1.154s including waiting). Image size: 108204 bytes. | |
openshift-marketplace |
kubelet |
7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d42bd6t |
Created |
Created container: extract | |
openshift-marketplace |
kubelet |
7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d42bd6t |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" already present on machine | |
openshift-marketplace |
kubelet |
7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d42bd6t |
Started |
Started container extract | |
openshift-marketplace |
job-controller |
7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d40dd54 |
Completed |
Job completed | |
openshift-storage |
operator-lifecycle-manager |
lvms-operator.v4.18.4 |
RequirementsNotMet |
one or more requirements couldn't be found | |
openshift-storage |
operator-lifecycle-manager |
lvms-operator.v4.18.4 |
RequirementsUnknown |
requirements not yet checked | |
| (x2) | openshift-storage |
operator-lifecycle-manager |
lvms-operator.v4.18.4 |
AllRequirementsMet |
all requirements found, attempting install |
openshift-storage |
deployment-controller |
lvms-operator |
ScalingReplicaSet |
Scaled up replica set lvms-operator-67f88ff75f to 1 | |
| (x2) | openshift-storage |
operator-lifecycle-manager |
lvms-operator.v4.18.4 |
InstallWaiting |
installing: waiting for deployment lvms-operator to become ready: deployment "lvms-operator" not available: Deployment does not have minimum availability. |
openshift-storage |
operator-lifecycle-manager |
lvms-operator.v4.18.4 |
InstallSucceeded |
waiting for install components to report healthy | |
openshift-storage |
replicaset-controller |
lvms-operator-67f88ff75f |
SuccessfulCreate |
Created pod: lvms-operator-67f88ff75f-5j2p2 | |
openshift-storage |
kubelet |
lvms-operator-67f88ff75f-5j2p2 |
Pulling |
Pulling image "registry.redhat.io/lvms4/lvms-rhel9-operator@sha256:be49109ec5be53a783b2de9bc6529db99930a69021577c05cfa0bf1312e06d69" | |
openshift-storage |
multus |
lvms-operator-67f88ff75f-5j2p2 |
AddedInterface |
Add eth0 [10.128.0.142/23] from ovn-kubernetes | |
openshift-storage |
kubelet |
lvms-operator-67f88ff75f-5j2p2 |
Created |
Created container: manager | |
openshift-storage |
kubelet |
lvms-operator-67f88ff75f-5j2p2 |
Started |
Started container manager | |
openshift-storage |
kubelet |
lvms-operator-67f88ff75f-5j2p2 |
Pulled |
Successfully pulled image "registry.redhat.io/lvms4/lvms-rhel9-operator@sha256:be49109ec5be53a783b2de9bc6529db99930a69021577c05cfa0bf1312e06d69" in 4.529s (4.529s including waiting). Image size: 238305644 bytes. | |
openshift-storage |
operator-lifecycle-manager |
lvms-operator.v4.18.4 |
InstallSucceeded |
install strategy completed with no errors | |
openshift-kube-controller-manager |
cluster-policy-controller-namespace-security-allocation-controller |
kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for metallb-system namespace | |
openshift-kube-controller-manager |
cluster-policy-controller-namespace-security-allocation-controller |
kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-nmstate namespace | |
openshift-kube-controller-manager |
cluster-policy-controller-namespace-security-allocation-controller |
kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for cert-manager-operator namespace | |
openshift-marketplace |
job-controller |
1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a36aa3 |
SuccessfulCreate |
Created pod: 1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ag6hs7 | |
openshift-marketplace |
multus |
1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ag6hs7 |
AddedInterface |
Add eth0 [10.128.0.143/23] from ovn-kubernetes | |
openshift-marketplace |
kubelet |
1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ag6hs7 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1ca78c423f43f89a0411e40393642f64e4f8df9e5f61c25e31047c4cce170f9" already present on machine | |
openshift-marketplace |
kubelet |
1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ag6hs7 |
Started |
Started container util | |
openshift-marketplace |
job-controller |
af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f8344397 |
SuccessfulCreate |
Created pod: af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83265nd | |
openshift-marketplace |
multus |
af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83265nd |
AddedInterface |
Add eth0 [10.128.0.144/23] from ovn-kubernetes | |
openshift-marketplace |
kubelet |
1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ag6hs7 |
Created |
Created container: util | |
openshift-marketplace |
kubelet |
1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ag6hs7 |
Pulling |
Pulling image "registry.redhat.io/cert-manager/cert-manager-operator-bundle@sha256:acaaea813059d4ac5b2618395bd9113f72ada0a33aaaba91aa94f000e77df407" | |
openshift-marketplace |
kubelet |
af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83265nd |
Pulling |
Pulling image "registry.redhat.io/openshift4/ose-metallb-operator-bundle@sha256:fc4dd100d3f8058c7412f5923ce97b810a15130df1c117206bf90e95f0b51a0a" | |
openshift-marketplace |
kubelet |
af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83265nd |
Created |
Created container: util | |
openshift-marketplace |
kubelet |
af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83265nd |
Started |
Started container util | |
openshift-marketplace |
kubelet |
af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83265nd |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1ca78c423f43f89a0411e40393642f64e4f8df9e5f61c25e31047c4cce170f9" already present on machine | |
openshift-marketplace |
job-controller |
5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f90ea3 |
SuccessfulCreate |
Created pod: 5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fzc2gr | |
openshift-marketplace |
multus |
5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fzc2gr |
AddedInterface |
Add eth0 [10.128.0.145/23] from ovn-kubernetes | |
openshift-marketplace |
kubelet |
5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fzc2gr |
Pulling |
Pulling image "registry.redhat.io/openshift4/kubernetes-nmstate-operator-bundle@sha256:6d20aa78e253f44695ba748e195e2e7b832008d5a1d41cf66e7cb6def58a5f47" | |
openshift-marketplace |
kubelet |
5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fzc2gr |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1ca78c423f43f89a0411e40393642f64e4f8df9e5f61c25e31047c4cce170f9" already present on machine | |
openshift-marketplace |
kubelet |
5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fzc2gr |
Created |
Created container: util | |
openshift-marketplace |
kubelet |
5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fzc2gr |
Started |
Started container util | |
openshift-marketplace |
kubelet |
5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fzc2gr |
Pulled |
Successfully pulled image "registry.redhat.io/openshift4/kubernetes-nmstate-operator-bundle@sha256:6d20aa78e253f44695ba748e195e2e7b832008d5a1d41cf66e7cb6def58a5f47" in 2.131s (2.131s including waiting). Image size: 176484 bytes. | |
openshift-marketplace |
kubelet |
af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83265nd |
Created |
Created container: pull | |
openshift-marketplace |
kubelet |
5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fzc2gr |
Created |
Created container: pull | |
openshift-marketplace |
kubelet |
5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fzc2gr |
Started |
Started container pull | |
openshift-marketplace |
kubelet |
af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83265nd |
Pulled |
Successfully pulled image "registry.redhat.io/openshift4/ose-metallb-operator-bundle@sha256:fc4dd100d3f8058c7412f5923ce97b810a15130df1c117206bf90e95f0b51a0a" in 3.14s (3.14s including waiting). Image size: 329358 bytes. | |
openshift-marketplace |
kubelet |
af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83265nd |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" already present on machine | |
openshift-marketplace |
kubelet |
1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ag6hs7 |
Started |
Started container pull | |
openshift-marketplace |
kubelet |
1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ag6hs7 |
Created |
Created container: pull | |
openshift-marketplace |
kubelet |
1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ag6hs7 |
Pulled |
Successfully pulled image "registry.redhat.io/cert-manager/cert-manager-operator-bundle@sha256:acaaea813059d4ac5b2618395bd9113f72ada0a33aaaba91aa94f000e77df407" in 4.156s (4.156s including waiting). Image size: 105944483 bytes. | |
openshift-marketplace |
kubelet |
af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83265nd |
Started |
Started container pull | |
openshift-marketplace |
kubelet |
1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ag6hs7 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" already present on machine | |
openshift-marketplace |
kubelet |
5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fzc2gr |
Created |
Created container: extract | |
openshift-marketplace |
kubelet |
1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ag6hs7 |
Started |
Started container extract | |
openshift-marketplace |
kubelet |
af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83265nd |
Created |
Created container: extract | |
openshift-marketplace |
kubelet |
5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fzc2gr |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" already present on machine | |
openshift-marketplace |
kubelet |
1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ag6hs7 |
Created |
Created container: extract | |
openshift-marketplace |
kubelet |
5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fzc2gr |
Started |
Started container extract | |
openshift-marketplace |
kubelet |
af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83265nd |
Started |
Started container extract | |
openshift-marketplace |
job-controller |
1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a36aa3 |
Completed |
Job completed | |
openshift-marketplace |
job-controller |
af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f8344397 |
Completed |
Job completed | |
openshift-marketplace |
job-controller |
5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f90ea3 |
Completed |
Job completed | |
openshift-marketplace |
job-controller |
6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92100b6b5 |
SuccessfulCreate |
Created pod: 6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921049l57 | |
openshift-marketplace |
kubelet |
6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921049l57 |
Started |
Started container util | |
openshift-marketplace |
kubelet |
6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921049l57 |
Created |
Created container: util | |
openshift-marketplace |
kubelet |
6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921049l57 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1ca78c423f43f89a0411e40393642f64e4f8df9e5f61c25e31047c4cce170f9" already present on machine | |
openshift-marketplace |
multus |
6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921049l57 |
AddedInterface |
Add eth0 [10.128.0.146/23] from ovn-kubernetes | |
openshift-marketplace |
kubelet |
6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921049l57 |
Pulling |
Pulling image "registry.redhat.io/cluster-observability-operator/cluster-observability-operator-bundle@sha256:04d900c45998f21ccf96af1ba6b8c7485d13c676ca365d70b491f7dcc48974ac" | |
openshift-marketplace |
kubelet |
6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921049l57 |
Pulled |
Successfully pulled image "registry.redhat.io/cluster-observability-operator/cluster-observability-operator-bundle@sha256:04d900c45998f21ccf96af1ba6b8c7485d13c676ca365d70b491f7dcc48974ac" in 2.216s (2.216s including waiting). Image size: 4896371 bytes. | |
openshift-marketplace |
kubelet |
6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921049l57 |
Started |
Started container pull | |
openshift-marketplace |
kubelet |
6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921049l57 |
Created |
Created container: pull | |
metallb-system |
operator-lifecycle-manager |
metallb-operator.v4.18.0-202511181540 |
RequirementsUnknown |
requirements not yet checked | |
openshift-marketplace |
kubelet |
6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921049l57 |
Started |
Started container extract | |
openshift-marketplace |
kubelet |
6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921049l57 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" already present on machine | |
metallb-system |
operator-lifecycle-manager |
metallb-operator.v4.18.0-202511181540 |
RequirementsNotMet |
one or more requirements couldn't be found | |
openshift-marketplace |
kubelet |
6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c921049l57 |
Created |
Created container: extract | |
openshift-marketplace |
job-controller |
6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92100b6b5 |
Completed |
Job completed | |
default |
cert-manager-istio-csr-controller |
ControllerStarted |
controller is starting | ||
openshift-kube-controller-manager |
cluster-policy-controller-namespace-security-allocation-controller |
kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for cert-manager namespace | |
openshift-nmstate |
operator-lifecycle-manager |
kubernetes-nmstate-operator.4.18.0-202511191213 |
RequirementsUnknown |
requirements not yet checked | |
cert-manager |
deployment-controller |
cert-manager-cainjector |
ScalingReplicaSet |
Scaled up replica set cert-manager-cainjector-855d9ccff4 to 1 | |
cert-manager |
replicaset-controller |
cert-manager-webhook-f4fb5df64 |
SuccessfulCreate |
Created pod: cert-manager-webhook-f4fb5df64-42npf | |
| (x6) | cert-manager |
replicaset-controller |
cert-manager-webhook-f4fb5df64 |
FailedCreate |
Error creating: pods "cert-manager-webhook-f4fb5df64-" is forbidden: error looking up service account cert-manager/cert-manager-webhook: serviceaccount "cert-manager-webhook" not found |
cert-manager |
deployment-controller |
cert-manager-webhook |
ScalingReplicaSet |
Scaled up replica set cert-manager-webhook-f4fb5df64 to 1 | |
cert-manager |
deployment-controller |
cert-manager |
ScalingReplicaSet |
Scaled up replica set cert-manager-86cb77c54b to 1 | |
openshift-nmstate |
replicaset-controller |
nmstate-operator-5b5b58f5c8 |
SuccessfulCreate |
Created pod: nmstate-operator-5b5b58f5c8-kv2p5 | |
openshift-nmstate |
operator-lifecycle-manager |
kubernetes-nmstate-operator.4.18.0-202511191213 |
AllRequirementsMet |
all requirements found, attempting install | |
cert-manager |
multus |
cert-manager-webhook-f4fb5df64-42npf |
AddedInterface |
Add eth0 [10.128.0.148/23] from ovn-kubernetes | |
cert-manager |
kubelet |
cert-manager-webhook-f4fb5df64-42npf |
Pulling |
Pulling image "registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:29a0fa1c2f2a6cee62a0468a3883d16d491b4af29130dad6e3e2bb2948f274df" | |
openshift-nmstate |
operator-lifecycle-manager |
kubernetes-nmstate-operator.4.18.0-202511191213 |
InstallSucceeded |
waiting for install components to report healthy | |
openshift-nmstate |
deployment-controller |
nmstate-operator |
ScalingReplicaSet |
Scaled up replica set nmstate-operator-5b5b58f5c8 to 1 | |
openshift-nmstate |
multus |
nmstate-operator-5b5b58f5c8-kv2p5 |
AddedInterface |
Add eth0 [10.128.0.149/23] from ovn-kubernetes | |
openshift-nmstate |
kubelet |
nmstate-operator-5b5b58f5c8-kv2p5 |
Pulling |
Pulling image "registry.redhat.io/openshift4/kubernetes-nmstate-rhel9-operator@sha256:dd89e08ed6257597e99b1243839d5c76e6bad72fe9e168c0eba5ce9c449189cf" | |
openshift-nmstate |
operator-lifecycle-manager |
kubernetes-nmstate-operator.4.18.0-202511191213 |
InstallWaiting |
installing: waiting for deployment nmstate-operator to become ready: deployment "nmstate-operator" not available: Deployment does not have minimum availability. | |
| (x10) | cert-manager |
replicaset-controller |
cert-manager-cainjector-855d9ccff4 |
FailedCreate |
Error creating: pods "cert-manager-cainjector-855d9ccff4-" is forbidden: error looking up service account cert-manager/cert-manager-cainjector: serviceaccount "cert-manager-cainjector" not found |
cert-manager |
replicaset-controller |
cert-manager-cainjector-855d9ccff4 |
SuccessfulCreate |
Created pod: cert-manager-cainjector-855d9ccff4-lh4km | |
| (x11) | cert-manager |
replicaset-controller |
cert-manager-86cb77c54b |
FailedCreate |
Error creating: pods "cert-manager-86cb77c54b-" is forbidden: error looking up service account cert-manager/cert-manager: serviceaccount "cert-manager" not found |
metallb-system |
operator-lifecycle-manager |
install-d9zl7 |
AppliedWithWarnings |
1 warning(s) generated during installation of operator "metallb-operator.v4.18.0-202511181540" (CustomResourceDefinition "bgppeers.metallb.io"): v1beta1 is deprecated, please use v1beta2 | |
metallb-system |
replicaset-controller |
metallb-operator-controller-manager-6f8cddc44c |
SuccessfulCreate |
Created pod: metallb-operator-controller-manager-6f8cddc44c-f979v | |
metallb-system |
deployment-controller |
metallb-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set metallb-operator-controller-manager-6f8cddc44c to 1 | |
openshift-operators |
operator-lifecycle-manager |
cluster-observability-operator.v1.3.0 |
RequirementsUnknown |
requirements not yet checked | |
metallb-system |
deployment-controller |
metallb-operator-webhook-server |
ScalingReplicaSet |
Scaled up replica set metallb-operator-webhook-server-567cbcbb98 to 1 | |
cert-manager |
kubelet |
cert-manager-webhook-f4fb5df64-42npf |
Pulled |
Successfully pulled image "registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:29a0fa1c2f2a6cee62a0468a3883d16d491b4af29130dad6e3e2bb2948f274df" in 10.426s (10.426s including waiting). Image size: 427346153 bytes. | |
cert-manager |
kubelet |
cert-manager-cainjector-855d9ccff4-lh4km |
Pulled |
Container image "registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:29a0fa1c2f2a6cee62a0468a3883d16d491b4af29130dad6e3e2bb2948f274df" already present on machine | |
openshift-nmstate |
kubelet |
nmstate-operator-5b5b58f5c8-kv2p5 |
Created |
Created container: nmstate-operator | |
openshift-nmstate |
kubelet |
nmstate-operator-5b5b58f5c8-kv2p5 |
Pulled |
Successfully pulled image "registry.redhat.io/openshift4/kubernetes-nmstate-rhel9-operator@sha256:dd89e08ed6257597e99b1243839d5c76e6bad72fe9e168c0eba5ce9c449189cf" in 9.299s (9.299s including waiting). Image size: 445876816 bytes. | |
metallb-system |
replicaset-controller |
metallb-operator-webhook-server-567cbcbb98 |
SuccessfulCreate |
Created pod: metallb-operator-webhook-server-567cbcbb98-h2n4q | |
openshift-nmstate |
kubelet |
nmstate-operator-5b5b58f5c8-kv2p5 |
Started |
Started container nmstate-operator | |
cert-manager |
multus |
cert-manager-cainjector-855d9ccff4-lh4km |
AddedInterface |
Add eth0 [10.128.0.150/23] from ovn-kubernetes | |
| | cert-manager | replicaset-controller | cert-manager-86cb77c54b | SuccessfulCreate | Created pod: cert-manager-86cb77c54b-db45f |
| | cert-manager | kubelet | cert-manager-webhook-f4fb5df64-42npf | Created | Created container: cert-manager-webhook |
| | cert-manager | kubelet | cert-manager-webhook-f4fb5df64-42npf | Started | Started container cert-manager-webhook |
| | metallb-system | multus | metallb-operator-controller-manager-6f8cddc44c-f979v | AddedInterface | Add eth0 [10.128.0.151/23] from ovn-kubernetes |
| | metallb-system | kubelet | metallb-operator-controller-manager-6f8cddc44c-f979v | Pulling | Pulling image "registry.redhat.io/openshift4/metallb-rhel9-operator@sha256:113daf5589fc8d963b942a3ab0fc20408aa6ed44e34019539e0e3252bb11297a" |
| | cert-manager | kubelet | cert-manager-cainjector-855d9ccff4-lh4km | Started | Started container cert-manager-cainjector |
| | cert-manager | kubelet | cert-manager-cainjector-855d9ccff4-lh4km | Created | Created container: cert-manager-cainjector |
| | cert-manager | multus | cert-manager-86cb77c54b-db45f | AddedInterface | Add eth0 [10.128.0.153/23] from ovn-kubernetes |
| | metallb-system | multus | metallb-operator-webhook-server-567cbcbb98-h2n4q | AddedInterface | Add eth0 [10.128.0.152/23] from ovn-kubernetes |
| | metallb-system | kubelet | metallb-operator-webhook-server-567cbcbb98-h2n4q | Pulling | Pulling image "registry.redhat.io/openshift4/metallb-rhel9@sha256:afa5a50746f3d69cef22c41c612ce3e7fe91e1da1d1d1566dee42ee304132379" |
| | kube-system | cert-manager-cainjector-855d9ccff4-lh4km_32054349-888f-4fe2-aba3-5013a0307ea2 | cert-manager-cainjector-leader-election | LeaderElection | cert-manager-cainjector-855d9ccff4-lh4km_32054349-888f-4fe2-aba3-5013a0307ea2 became leader |
| | cert-manager | kubelet | cert-manager-86cb77c54b-db45f | Pulled | Container image "registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:29a0fa1c2f2a6cee62a0468a3883d16d491b4af29130dad6e3e2bb2948f274df" already present on machine |
| | cert-manager | kubelet | cert-manager-86cb77c54b-db45f | Created | Created container: cert-manager-controller |
| | cert-manager | kubelet | cert-manager-86cb77c54b-db45f | Started | Started container cert-manager-controller |
| (x2) | openshift-operators | controllermanager | obo-prometheus-operator-admission-webhook | NoPods | No matching pods found |
| | openshift-nmstate | operator-lifecycle-manager | kubernetes-nmstate-operator.4.18.0-202511191213 | InstallSucceeded | install strategy completed with no errors |
| | metallb-system | kubelet | metallb-operator-webhook-server-567cbcbb98-h2n4q | Pulled | Successfully pulled image "registry.redhat.io/openshift4/metallb-rhel9@sha256:afa5a50746f3d69cef22c41c612ce3e7fe91e1da1d1d1566dee42ee304132379" in 9.776s (9.776s including waiting). Image size: 549581950 bytes. |
| | metallb-system | kubelet | metallb-operator-controller-manager-6f8cddc44c-f979v | Pulled | Successfully pulled image "registry.redhat.io/openshift4/metallb-rhel9-operator@sha256:113daf5589fc8d963b942a3ab0fc20408aa6ed44e34019539e0e3252bb11297a" in 12.958s (12.958s including waiting). Image size: 457005415 bytes. |
| | metallb-system | kubelet | metallb-operator-webhook-server-567cbcbb98-h2n4q | Created | Created container: webhook-server |
| (x2) | openshift-operators | operator-lifecycle-manager | cluster-observability-operator.v1.3.0 | RequirementsNotMet | one or more requirements couldn't be found |
| | metallb-system | metallb-operator-controller-manager-6f8cddc44c-f979v_7c3af138-d12d-4eca-b8a3-fc04d1ce8a29 | metallb.io.metallboperator | LeaderElection | metallb-operator-controller-manager-6f8cddc44c-f979v_7c3af138-d12d-4eca-b8a3-fc04d1ce8a29 became leader |
| | metallb-system | kubelet | metallb-operator-webhook-server-567cbcbb98-h2n4q | Started | Started container webhook-server |
| | openshift-operators | operator-lifecycle-manager | cluster-observability-operator.v1.3.0 | AllRequirementsMet | all requirements found, attempting install |
| | openshift-operators | deployment-controller | obo-prometheus-operator | ScalingReplicaSet | Scaled up replica set obo-prometheus-operator-668cf9dfbb to 1 |
| | openshift-operators | replicaset-controller | obo-prometheus-operator-admission-webhook-74c6d8bb8 | SuccessfulCreate | Created pod: obo-prometheus-operator-admission-webhook-74c6d8bb8-ckdnn |
| | openshift-operators | replicaset-controller | obo-prometheus-operator-admission-webhook-74c6d8bb8 | SuccessfulCreate | Created pod: obo-prometheus-operator-admission-webhook-74c6d8bb8-jqkr2 |
| | openshift-operators | replicaset-controller | obo-prometheus-operator-668cf9dfbb | SuccessfulCreate | Created pod: obo-prometheus-operator-668cf9dfbb-tllrd |
| | openshift-operators | replicaset-controller | perses-operator-5446b9c989 | SuccessfulCreate | Created pod: perses-operator-5446b9c989-mcgjr |
| | openshift-operators | deployment-controller | perses-operator | ScalingReplicaSet | Scaled up replica set perses-operator-5446b9c989 to 1 |
| | openshift-operators | deployment-controller | obo-prometheus-operator-admission-webhook | ScalingReplicaSet | Scaled up replica set obo-prometheus-operator-admission-webhook-74c6d8bb8 to 2 |
| | openshift-operators | replicaset-controller | observability-operator-d8bb48f5d | SuccessfulCreate | Created pod: observability-operator-d8bb48f5d-wc4bd |
| | openshift-operators | deployment-controller | observability-operator | ScalingReplicaSet | Scaled up replica set observability-operator-d8bb48f5d to 1 |
| | openshift-operators | kubelet | observability-operator-d8bb48f5d-wc4bd | Pulling | Pulling image "registry.redhat.io/cluster-observability-operator/cluster-observability-rhel9-operator@sha256:ce7d2904f7b238aa37dfe74a0b76bf73629e7a14fa52bf54b0ecf030ca36f1bb" |
| | openshift-operators | operator-lifecycle-manager | cluster-observability-operator.v1.3.0 | InstallSucceeded | waiting for install components to report healthy |
| | openshift-operators | kubelet | obo-prometheus-operator-admission-webhook-74c6d8bb8-ckdnn | Pulling | Pulling image "registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:43d33f0125e6b990f4a972ac4e952a065d7e72dc1690c6c836963b7341734aec" |
| | openshift-operators | multus | obo-prometheus-operator-admission-webhook-74c6d8bb8-ckdnn | AddedInterface | Add eth0 [10.128.0.155/23] from ovn-kubernetes |
| | openshift-operators | multus | obo-prometheus-operator-admission-webhook-74c6d8bb8-jqkr2 | AddedInterface | Add eth0 [10.128.0.156/23] from ovn-kubernetes |
| | openshift-operators | kubelet | obo-prometheus-operator-admission-webhook-74c6d8bb8-jqkr2 | Pulling | Pulling image "registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:43d33f0125e6b990f4a972ac4e952a065d7e72dc1690c6c836963b7341734aec" |
| | openshift-operators | kubelet | perses-operator-5446b9c989-mcgjr | Pulling | Pulling image "registry.redhat.io/cluster-observability-operator/perses-rhel9-operator@sha256:9aec4c328ec43e40481e06ca5808deead74b75c0aacb90e9e72966c3fa14f385" |
| | openshift-operators | multus | observability-operator-d8bb48f5d-wc4bd | AddedInterface | Add eth0 [10.128.0.157/23] from ovn-kubernetes |
| | openshift-operators | multus | obo-prometheus-operator-668cf9dfbb-tllrd | AddedInterface | Add eth0 [10.128.0.154/23] from ovn-kubernetes |
| | openshift-operators | multus | perses-operator-5446b9c989-mcgjr | AddedInterface | Add eth0 [10.128.0.158/23] from ovn-kubernetes |
| | openshift-operators | kubelet | obo-prometheus-operator-668cf9dfbb-tllrd | Pulling | Pulling image "registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:203cf5b9dc1460f09e75f58d8b5cf7df5e57c18c8c6a41c14b5e8977d83263f3" |
| | openshift-operators | operator-lifecycle-manager | cluster-observability-operator.v1.3.0 | InstallWaiting | installing: waiting for deployment obo-prometheus-operator to become ready: deployment "obo-prometheus-operator" not available: Deployment does not have minimum availability. |
| | openshift-operators | kubelet | obo-prometheus-operator-admission-webhook-74c6d8bb8-ckdnn | Started | Started container prometheus-operator-admission-webhook |
| | openshift-operators | kubelet | observability-operator-d8bb48f5d-wc4bd | Created | Created container: operator |
| | openshift-operators | kubelet | obo-prometheus-operator-admission-webhook-74c6d8bb8-ckdnn | Created | Created container: prometheus-operator-admission-webhook |
| | openshift-operators | kubelet | obo-prometheus-operator-668cf9dfbb-tllrd | Pulled | Successfully pulled image "registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:203cf5b9dc1460f09e75f58d8b5cf7df5e57c18c8c6a41c14b5e8977d83263f3" in 10.717s (10.717s including waiting). Image size: 306562378 bytes. |
| | openshift-operators | kubelet | obo-prometheus-operator-admission-webhook-74c6d8bb8-jqkr2 | Started | Started container prometheus-operator-admission-webhook |
| | openshift-operators | kubelet | obo-prometheus-operator-admission-webhook-74c6d8bb8-jqkr2 | Created | Created container: prometheus-operator-admission-webhook |
| | openshift-operators | kubelet | observability-operator-d8bb48f5d-wc4bd | Started | Started container operator |
| | openshift-operators | kubelet | obo-prometheus-operator-668cf9dfbb-tllrd | Started | Started container prometheus-operator |
| | openshift-operators | kubelet | observability-operator-d8bb48f5d-wc4bd | Pulled | Successfully pulled image "registry.redhat.io/cluster-observability-operator/cluster-observability-rhel9-operator@sha256:ce7d2904f7b238aa37dfe74a0b76bf73629e7a14fa52bf54b0ecf030ca36f1bb" in 10.659s (10.659s including waiting). Image size: 500139589 bytes. |
| | openshift-operators | kubelet | obo-prometheus-operator-admission-webhook-74c6d8bb8-ckdnn | Pulled | Successfully pulled image "registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:43d33f0125e6b990f4a972ac4e952a065d7e72dc1690c6c836963b7341734aec" in 10.609s (10.609s including waiting). Image size: 258533084 bytes. |
| | openshift-operators | kubelet | perses-operator-5446b9c989-mcgjr | Started | Started container perses-operator |
| | openshift-operators | kubelet | perses-operator-5446b9c989-mcgjr | Created | Created container: perses-operator |
| | openshift-operators | kubelet | obo-prometheus-operator-admission-webhook-74c6d8bb8-jqkr2 | Pulled | Successfully pulled image "registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:43d33f0125e6b990f4a972ac4e952a065d7e72dc1690c6c836963b7341734aec" in 10.561s (10.561s including waiting). Image size: 258533084 bytes. |
| | openshift-operators | kubelet | perses-operator-5446b9c989-mcgjr | Pulled | Successfully pulled image "registry.redhat.io/cluster-observability-operator/perses-rhel9-operator@sha256:9aec4c328ec43e40481e06ca5808deead74b75c0aacb90e9e72966c3fa14f385" in 10.534s (10.534s including waiting). Image size: 282278649 bytes. |
| | openshift-operators | kubelet | obo-prometheus-operator-668cf9dfbb-tllrd | Created | Created container: prometheus-operator |
| | openshift-operators | operator-lifecycle-manager | cluster-observability-operator.v1.3.0 | InstallWaiting | installing: waiting for deployment perses-operator to become ready: deployment "perses-operator" not available: Deployment does not have minimum availability. |
| | openshift-operators | operator-lifecycle-manager | cluster-observability-operator.v1.3.0 | InstallSucceeded | install strategy completed with no errors |
| | kube-system | cert-manager-leader-election | cert-manager-controller | LeaderElection | cert-manager-86cb77c54b-db45f-external-cert-manager-controller became leader |
| | metallb-system | replicaset-controller | frr-k8s-webhook-server-7fcb986d4 | SuccessfulCreate | Created pod: frr-k8s-webhook-server-7fcb986d4-hfnqb |
| | default | garbage-collector-controller | frr-k8s-validating-webhook-configuration | OwnerRefInvalidNamespace | ownerRef [metallb.io/v1beta1/MetalLB, namespace: , name: metallb, uid: 4c1ea53d-cbb6-490f-a50d-dc18973908ad] does not exist in namespace "" |
| | metallb-system | deployment-controller | frr-k8s-webhook-server | ScalingReplicaSet | Scaled up replica set frr-k8s-webhook-server-7fcb986d4 to 1 |
| | metallb-system | daemonset-controller | frr-k8s | SuccessfulCreate | Created pod: frr-k8s-ckzx9 |
| | metallb-system | kubelet | frr-k8s-webhook-server-7fcb986d4-hfnqb | FailedMount | MountVolume.SetUp failed for volume "cert" : secret "frr-k8s-webhook-server-cert" not found |
| | metallb-system | daemonset-controller | speaker | SuccessfulCreate | Created pod: speaker-868rl |
| | metallb-system | kubelet | frr-k8s-ckzx9 | Pulling | Pulling image "registry.redhat.io/openshift4/frr-rhel9@sha256:e5c5e7ca4ed54c9edba5dfa1d504bbe58016c2abdc872ebb8b26a628958e5a2a" |
| | metallb-system | deployment-controller | controller | ScalingReplicaSet | Scaled up replica set controller-f8648f98b to 1 |
| | metallb-system | replicaset-controller | controller-f8648f98b | SuccessfulCreate | Created pod: controller-f8648f98b-v2fsc |
| | metallb-system | multus | frr-k8s-webhook-server-7fcb986d4-hfnqb | AddedInterface | Add eth0 [10.128.0.159/23] from ovn-kubernetes |
| | metallb-system | multus | controller-f8648f98b-v2fsc | AddedInterface | Add eth0 [10.128.0.160/23] from ovn-kubernetes |
| (x3) | metallb-system | kubelet | speaker-868rl | FailedMount | MountVolume.SetUp failed for volume "memberlist" : secret "metallb-memberlist" not found |
| | metallb-system | kubelet | controller-f8648f98b-v2fsc | Created | Created container: controller |
| | metallb-system | kubelet | frr-k8s-webhook-server-7fcb986d4-hfnqb | Pulling | Pulling image "registry.redhat.io/openshift4/frr-rhel9@sha256:e5c5e7ca4ed54c9edba5dfa1d504bbe58016c2abdc872ebb8b26a628958e5a2a" |
| | metallb-system | kubelet | controller-f8648f98b-v2fsc | Pulling | Pulling image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:23ad174e653d608ec2285f670d8669dbe8bb433f7c215bdb59f5c6ac6ad1bcc9" |
| | metallb-system | kubelet | controller-f8648f98b-v2fsc | Started | Started container controller |
| | metallb-system | kubelet | controller-f8648f98b-v2fsc | Pulled | Container image "registry.redhat.io/openshift4/metallb-rhel9@sha256:afa5a50746f3d69cef22c41c612ce3e7fe91e1da1d1d1566dee42ee304132379" already present on machine |
| | openshift-nmstate | deployment-controller | nmstate-metrics | ScalingReplicaSet | Scaled up replica set nmstate-metrics-7f946cbc9 to 1 |
| | openshift-nmstate | replicaset-controller | nmstate-metrics-7f946cbc9 | SuccessfulCreate | Created pod: nmstate-metrics-7f946cbc9-xmkjr |
| | metallb-system | kubelet | speaker-868rl | Pulling | Pulling image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:23ad174e653d608ec2285f670d8669dbe8bb433f7c215bdb59f5c6ac6ad1bcc9" |
| | metallb-system | kubelet | speaker-868rl | Started | Started container speaker |
| | metallb-system | kubelet | speaker-868rl | Created | Created container: speaker |
| | openshift-nmstate | deployment-controller | nmstate-webhook | ScalingReplicaSet | Scaled up replica set nmstate-webhook-5f6d4c5ccb to 1 |
| | openshift-nmstate | kubelet | nmstate-webhook-5f6d4c5ccb-zg9dh | FailedMount | MountVolume.SetUp failed for volume "tls-key-pair" : secret "openshift-nmstate-webhook" not found |
| | openshift-nmstate | daemonset-controller | nmstate-handler | SuccessfulCreate | Created pod: nmstate-handler-qthsq |
| | metallb-system | kubelet | speaker-868rl | Pulled | Container image "registry.redhat.io/openshift4/metallb-rhel9@sha256:afa5a50746f3d69cef22c41c612ce3e7fe91e1da1d1d1566dee42ee304132379" already present on machine |
| | openshift-nmstate | replicaset-controller | nmstate-webhook-5f6d4c5ccb | SuccessfulCreate | Created pod: nmstate-webhook-5f6d4c5ccb-zg9dh |
| | openshift-nmstate | multus | nmstate-webhook-5f6d4c5ccb-zg9dh | AddedInterface | Add eth0 [10.128.0.162/23] from ovn-kubernetes |
| | openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled up replica set console-648b6fc966 to 1 |
| (x5) | openshift-console-operator | console-operator-console-operator-consoleoperator | console-operator | ConfigMapUpdated | Updated ConfigMap/console-config -n openshift-console: cause by changes in data.console-config.yaml |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: status.relatedObjects changed from [{"console.openshift.io" "consoleplugins" "" "monitoring-plugin"} {"console.openshift.io" "consoleplugins" "" "networking-console-plugin"} {"operator.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "infrastructures" "" "cluster"} {"config.openshift.io" "proxies" "" "cluster"} {"config.openshift.io" "oauths" "" "cluster"} {"oauth.openshift.io" "oauthclients" "" "console"} {"" "namespaces" "" "openshift-console-operator"} {"" "namespaces" "" "openshift-console"} {"" "configmaps" "openshift-config-managed" "console-public"}] to [{"console.openshift.io" "consoleplugins" "" "monitoring-plugin"} {"console.openshift.io" "consoleplugins" "" "networking-console-plugin"} {"console.openshift.io" "consoleplugins" "" "nmstate-console-plugin"} {"operator.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "infrastructures" "" "cluster"} {"config.openshift.io" "proxies" "" "cluster"} {"config.openshift.io" "oauths" "" "cluster"} {"oauth.openshift.io" "oauthclients" "" "console"} {"" "namespaces" "" "openshift-console-operator"} {"" "namespaces" "" "openshift-console"} {"" "configmaps" "openshift-config-managed" "console-public"}] |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Progressing changed from False to True ("SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected") |
| | openshift-console | replicaset-controller | console-648b6fc966 | SuccessfulCreate | Created pod: console-648b6fc966-ml84x |
| | metallb-system | kubelet | controller-f8648f98b-v2fsc | Started | Started container kube-rbac-proxy |
| | metallb-system | kubelet | controller-f8648f98b-v2fsc | Created | Created container: kube-rbac-proxy |
| | metallb-system | kubelet | controller-f8648f98b-v2fsc | Pulled | Successfully pulled image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:23ad174e653d608ec2285f670d8669dbe8bb433f7c215bdb59f5c6ac6ad1bcc9" in 1.5s (1.5s including waiting). Image size: 459552216 bytes. |
| | openshift-nmstate | kubelet | nmstate-handler-qthsq | Pulling | Pulling image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:8045b3d5059cc81bf37964d359055dea9e4915c83f3eec4f800d5ce294c06f97" |
| | openshift-nmstate | deployment-controller | nmstate-console-plugin | ScalingReplicaSet | Scaled up replica set nmstate-console-plugin-7fbb5f6569 to 1 |
| | openshift-nmstate | replicaset-controller | nmstate-console-plugin-7fbb5f6569 | SuccessfulCreate | Created pod: nmstate-console-plugin-7fbb5f6569-gdhpp |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Progressing message changed from "SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected" to "SyncLoopRefreshProgressing: working toward version 4.18.29, 1 replicas available" |
| | openshift-nmstate | multus | nmstate-metrics-7f946cbc9-xmkjr | AddedInterface | Add eth0 [10.128.0.161/23] from ovn-kubernetes |
| (x22) | openshift-console-operator | console-operator-console-operator-consoleoperator | console-operator | DeploymentUpdated | Updated Deployment.apps/console -n openshift-console because it changed |
| | openshift-nmstate | multus | nmstate-console-plugin-7fbb5f6569-gdhpp | AddedInterface | Add eth0 [10.128.0.163/23] from ovn-kubernetes |
| | metallb-system | kubelet | speaker-868rl | Pulled | Successfully pulled image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:23ad174e653d608ec2285f670d8669dbe8bb433f7c215bdb59f5c6ac6ad1bcc9" in 1.102s (1.102s including waiting). Image size: 459552216 bytes. |
| | metallb-system | kubelet | speaker-868rl | Created | Created container: kube-rbac-proxy |
| | metallb-system | kubelet | speaker-868rl | Started | Started container kube-rbac-proxy |
| | openshift-console | multus | console-648b6fc966-ml84x | AddedInterface | Add eth0 [10.128.0.164/23] from ovn-kubernetes |
| | openshift-nmstate | kubelet | nmstate-webhook-5f6d4c5ccb-zg9dh | Pulling | Pulling image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:8045b3d5059cc81bf37964d359055dea9e4915c83f3eec4f800d5ce294c06f97" |
| | openshift-nmstate | kubelet | nmstate-console-plugin-7fbb5f6569-gdhpp | Pulling | Pulling image "registry.redhat.io/openshift4/nmstate-console-plugin-rhel9@sha256:10fe26b1ef17d6fa13d22976b553b935f1cc14e74b8dd14a31306554aff7c513" |
| | openshift-nmstate | kubelet | nmstate-metrics-7f946cbc9-xmkjr | Pulling | Pulling image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:8045b3d5059cc81bf37964d359055dea9e4915c83f3eec4f800d5ce294c06f97" |
| | openshift-nmstate | kubelet | nmstate-metrics-7f946cbc9-xmkjr | Started | Started container kube-rbac-proxy |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-nmstate | kubelet | nmstate-metrics-7f946cbc9-xmkjr | Pulled | Successfully pulled image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:8045b3d5059cc81bf37964d359055dea9e4915c83f3eec4f800d5ce294c06f97" in 12.206s (12.206s including waiting). Image size: 492626754 bytes. |
| | openshift-nmstate | kubelet | nmstate-metrics-7f946cbc9-xmkjr | Created | Created container: nmstate-metrics |
| | openshift-nmstate | kubelet | nmstate-metrics-7f946cbc9-xmkjr | Started | Started container nmstate-metrics |
| | metallb-system | kubelet | frr-k8s-webhook-server-7fcb986d4-hfnqb | Started | Started container frr-k8s-webhook-server |
| | openshift-nmstate | kubelet | nmstate-metrics-7f946cbc9-xmkjr | Pulled | Container image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:23ad174e653d608ec2285f670d8669dbe8bb433f7c215bdb59f5c6ac6ad1bcc9" already present on machine |
| | openshift-nmstate | kubelet | nmstate-metrics-7f946cbc9-xmkjr | Created | Created container: kube-rbac-proxy |
| | openshift-nmstate | kubelet | nmstate-handler-qthsq | Created | Created container: nmstate-handler |
| | metallb-system | kubelet | frr-k8s-webhook-server-7fcb986d4-hfnqb | Created | Created container: frr-k8s-webhook-server |
| | metallb-system | kubelet | frr-k8s-webhook-server-7fcb986d4-hfnqb | Pulled | Successfully pulled image "registry.redhat.io/openshift4/frr-rhel9@sha256:e5c5e7ca4ed54c9edba5dfa1d504bbe58016c2abdc872ebb8b26a628958e5a2a" in 14.839s (14.839s including waiting). Image size: 656503086 bytes. |
| | openshift-console | kubelet | console-648b6fc966-ml84x | Started | Started container console |
| | openshift-nmstate | kubelet | nmstate-handler-qthsq | Pulled | Successfully pulled image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:8045b3d5059cc81bf37964d359055dea9e4915c83f3eec4f800d5ce294c06f97" in 13.306s (13.306s including waiting). Image size: 492626754 bytes. |
| | openshift-nmstate | kubelet | nmstate-webhook-5f6d4c5ccb-zg9dh | Pulled | Successfully pulled image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:8045b3d5059cc81bf37964d359055dea9e4915c83f3eec4f800d5ce294c06f97" in 12.373s (12.373s including waiting). Image size: 492626754 bytes. |
| | openshift-nmstate | kubelet | nmstate-webhook-5f6d4c5ccb-zg9dh | Created | Created container: nmstate-webhook |
| | metallb-system | kubelet | frr-k8s-ckzx9 | Pulled | Successfully pulled image "registry.redhat.io/openshift4/frr-rhel9@sha256:e5c5e7ca4ed54c9edba5dfa1d504bbe58016c2abdc872ebb8b26a628958e5a2a" in 18.003s (18.003s including waiting). Image size: 656503086 bytes. |
| | metallb-system | kubelet | frr-k8s-ckzx9 | Created | Created container: cp-frr-files |
| | metallb-system | kubelet | frr-k8s-ckzx9 | Started | Started container cp-frr-files |
| | openshift-console | kubelet | console-648b6fc966-ml84x | Created | Created container: console |
| | openshift-nmstate | kubelet | nmstate-webhook-5f6d4c5ccb-zg9dh | Started | Started container nmstate-webhook |
| | openshift-nmstate | kubelet | nmstate-handler-qthsq | Started | Started container nmstate-handler |
| | openshift-console | kubelet | console-648b6fc966-ml84x | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e27a636083db9043e3e4bbdc336b5e7fb5693422246e443fd1d913e157f01d46" already present on machine |
| | metallb-system | kubelet | frr-k8s-ckzx9 | Created | Created container: cp-reloader |
| | metallb-system | kubelet | frr-k8s-ckzx9 | Started | Started container cp-reloader |
| | metallb-system | kubelet | frr-k8s-ckzx9 | Pulled | Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:e5c5e7ca4ed54c9edba5dfa1d504bbe58016c2abdc872ebb8b26a628958e5a2a" already present on machine |
| | openshift-nmstate | kubelet | nmstate-console-plugin-7fbb5f6569-gdhpp | Started | Started container nmstate-console-plugin |
| | openshift-nmstate | kubelet | nmstate-console-plugin-7fbb5f6569-gdhpp | Created | Created container: nmstate-console-plugin |
| | openshift-nmstate | kubelet | nmstate-console-plugin-7fbb5f6569-gdhpp | Pulled | Successfully pulled image "registry.redhat.io/openshift4/nmstate-console-plugin-rhel9@sha256:10fe26b1ef17d6fa13d22976b553b935f1cc14e74b8dd14a31306554aff7c513" in 12.724s (12.724s including waiting). Image size: 447845824 bytes. |
| | metallb-system | kubelet | frr-k8s-ckzx9 | Created | Created container: cp-metrics |
| | metallb-system | kubelet | frr-k8s-ckzx9 | Pulled | Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:e5c5e7ca4ed54c9edba5dfa1d504bbe58016c2abdc872ebb8b26a628958e5a2a" already present on machine |
| | metallb-system | kubelet | frr-k8s-ckzx9 | Started | Started container cp-metrics |
| | metallb-system | kubelet | frr-k8s-ckzx9 | Created | Created container: frr |
| | metallb-system | kubelet | frr-k8s-ckzx9 | Created | Created container: controller |
| | metallb-system | kubelet | frr-k8s-ckzx9 | Pulled | Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:e5c5e7ca4ed54c9edba5dfa1d504bbe58016c2abdc872ebb8b26a628958e5a2a" already present on machine |
| | metallb-system | kubelet | frr-k8s-ckzx9 | Started | Started container controller |
| | metallb-system | kubelet | frr-k8s-ckzx9 | Pulled | Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:e5c5e7ca4ed54c9edba5dfa1d504bbe58016c2abdc872ebb8b26a628958e5a2a" already present on machine |
| | metallb-system | kubelet | frr-k8s-ckzx9 | Started | Started container frr |
| | metallb-system | kubelet | frr-k8s-ckzx9 | Pulled | Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:e5c5e7ca4ed54c9edba5dfa1d504bbe58016c2abdc872ebb8b26a628958e5a2a" already present on machine |
| | metallb-system | kubelet | frr-k8s-ckzx9 | Created | Created container: kube-rbac-proxy |
| | metallb-system | kubelet | frr-k8s-ckzx9 | Created | Created container: reloader |
| | metallb-system | kubelet | frr-k8s-ckzx9 | Started | Started container kube-rbac-proxy |
| | metallb-system | kubelet | frr-k8s-ckzx9 | Pulled | Container image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:23ad174e653d608ec2285f670d8669dbe8bb433f7c215bdb59f5c6ac6ad1bcc9" already present on machine |
| | metallb-system | kubelet | frr-k8s-ckzx9 | Started | Started container frr-metrics |
| | metallb-system | kubelet | frr-k8s-ckzx9 | Created | Created container: frr-metrics |
| | metallb-system | kubelet | frr-k8s-ckzx9 | Pulled | Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:e5c5e7ca4ed54c9edba5dfa1d504bbe58016c2abdc872ebb8b26a628958e5a2a" already present on machine |
| | metallb-system | kubelet | frr-k8s-ckzx9 | Started | Started container reloader |
| | openshift-console | replicaset-controller | console-6475766b4d | SuccessfulDelete | Deleted pod: console-6475766b4d-m2nml |
| (x2) | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Progressing changed from True to False ("All is well") |
| (x2) | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Progressing message changed from "SyncLoopRefreshProgressing: working toward version 4.18.29, 1 replicas available" to "SyncLoopRefreshProgressing: working toward version 4.18.29, 2 replicas available" |
| | openshift-console | kubelet | console-6475766b4d-m2nml | Killing | Stopping container console |
| | openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled down replica set console-6475766b4d to 0 from 1 |
| | openshift-storage | daemonset-controller | vg-manager | SuccessfulCreate | Created pod: vg-manager-d999v |
| | openshift-storage | multus | vg-manager-d999v | AddedInterface | Add eth0 [10.128.0.165/23] from ovn-kubernetes |
| (x2) | openshift-storage | kubelet | vg-manager-d999v | Pulled | Container image "registry.redhat.io/lvms4/lvms-rhel9-operator@sha256:be49109ec5be53a783b2de9bc6529db99930a69021577c05cfa0bf1312e06d69" already present on machine |
| (x2) | openshift-storage | kubelet | vg-manager-d999v | Created | Created container: vg-manager |
| (x2) | openshift-storage | kubelet | vg-manager-d999v | Started | Started container vg-manager |
| (x12) | openshift-storage | LVMClusterReconciler | lvmcluster | ResourceReconciliationIncomplete | LVMCluster's resources are not yet fully synchronized: csi node master-0 does not have driver topolvm.io |
| | openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openstack namespace |
| | openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openstack-operators namespace |
| (x7) | default | operator-lifecycle-manager | openstack-operators | ResolutionFailed | error using catalogsource openstack-operators/openstack-operator-index: no registry client established for catalogsource openstack-operators/openstack-operator-index |
| | openstack-operators | multus | openstack-operator-index-jg8xj | AddedInterface | Add eth0 [10.128.0.166/23] from ovn-kubernetes |
| | openstack-operators | kubelet | openstack-operator-index-jg8xj | Pulling | Pulling image "quay.io/openstack-k8s-operators/openstack-operator-index:latest" |
| | openstack-operators | kubelet | openstack-operator-index-jg8xj | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/openstack-operator-index:latest" in 1.554s (1.554s including waiting). Image size: 913061645 bytes. |
| | openstack-operators | kubelet | openstack-operator-index-jg8xj | Started | Started container registry-server |
| | openstack-operators | kubelet | openstack-operator-index-jg8xj | Created | Created container: registry-server |
| (x3) | default | operator-lifecycle-manager | openstack-operators | ResolutionFailed | error using catalogsource openstack-operators/openstack-operator-index: failed to list bundles: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 172.30.112.34:50051: connect: connection refused" |
| | openstack-operators | job-controller | 917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eaf13dca | SuccessfulCreate | Created pod: 917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafcwbgj |
| | openstack-operators | kubelet | 917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafcwbgj | Pulling | Pulling image "quay.io/openstack-k8s-operators/openstack-operator-bundle:908b28281d04717fb2b938119e146b840fe78221" |
| | openstack-operators | kubelet | 917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafcwbgj | Started | Started container util |
| | openstack-operators | multus | 917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafcwbgj | AddedInterface | Add eth0 [10.128.0.167/23] from ovn-kubernetes |
| | openstack-operators | kubelet | 917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafcwbgj | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1ca78c423f43f89a0411e40393642f64e4f8df9e5f61c25e31047c4cce170f9" already present on machine |
| | openstack-operators | kubelet | 917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafcwbgj | Created | Created container: util |
| | openstack-operators | kubelet | 917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafcwbgj | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/openstack-operator-bundle:908b28281d04717fb2b938119e146b840fe78221" in 894ms (894ms including waiting). Image size: 108094 bytes. |
| | openstack-operators | kubelet | 917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafcwbgj | Created | Created container: pull |
| | openstack-operators | kubelet | 917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafcwbgj | Started | Started container pull |
| | openstack-operators | kubelet | 917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafcwbgj | Created | Created container: extract |
| | openstack-operators | kubelet | 917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafcwbgj | Started | Started container extract |
| | openstack-operators | kubelet | 917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafcwbgj | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" already present on machine |
| | openstack-operators | job-controller | 917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eaf13dca | Completed | Job completed |
| (x2) | openstack-operators | operator-lifecycle-manager | openstack-operator.v0.5.0 | RequirementsUnknown | requirements not yet checked |
| | openstack-operators | operator-lifecycle-manager | openstack-operator.v0.5.0 | RequirementsNotMet | one or more requirements couldn't be found |
| | openstack-operators | operator-lifecycle-manager | openstack-operator.v0.5.0 | AllRequirementsMet | all requirements found, attempting install |
| | openstack-operators | replicaset-controller | openstack-operator-controller-operator-55b6fb9447 | SuccessfulCreate | Created pod: openstack-operator-controller-operator-55b6fb9447-zn55t |
| | openstack-operators | deployment-controller | openstack-operator-controller-operator | ScalingReplicaSet | Scaled up replica set openstack-operator-controller-operator-55b6fb9447 to 1 |
| | openstack-operators | multus | openstack-operator-controller-operator-55b6fb9447-zn55t | AddedInterface | Add eth0 [10.128.0.168/23] from ovn-kubernetes |
| | openstack-operators | kubelet | openstack-operator-controller-operator-55b6fb9447-zn55t | Pulling | Pulling image "quay.io/openstack-k8s-operators/openstack-operator@sha256:a930bf4711e92a6bdc8a5ddb01a63d3a647a7db5f9ddd19bc897cb74292b8365" |
| | openstack-operators | operator-lifecycle-manager | openstack-operator.v0.5.0 | InstallWaiting | installing: waiting for deployment openstack-operator-controller-operator to become ready: deployment "openstack-operator-controller-operator" not available: Deployment does not have minimum availability. |
| | openstack-operators | kubelet | openstack-operator-controller-operator-55b6fb9447-zn55t | Created | Created container: operator |
openstack-operators |
kubelet |
openstack-operator-controller-operator-55b6fb9447-zn55t |
Started |
Started container operator | |
openstack-operators |
openstack-operator-controller-operator-55b6fb9447-zn55t_7687eff6-afc1-49c7-8d04-9f500d38b4d2 |
20ca801f.openstack.org |
LeaderElection |
openstack-operator-controller-operator-55b6fb9447-zn55t_7687eff6-afc1-49c7-8d04-9f500d38b4d2 became leader | |
openstack-operators |
kubelet |
openstack-operator-controller-operator-55b6fb9447-zn55t |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/openstack-operator@sha256:a930bf4711e92a6bdc8a5ddb01a63d3a647a7db5f9ddd19bc897cb74292b8365" in 7.219s (7.219s including waiting). Image size: 292248395 bytes. | |
openshift-operator-lifecycle-manager |
cronjob-controller |
collect-profiles |
SuccessfulCreate |
Created job collect-profiles-29414175 | |
openshift-operator-lifecycle-manager |
job-controller |
collect-profiles-29414175 |
SuccessfulCreate |
Created pod: collect-profiles-29414175-675jb | |
openshift-operator-lifecycle-manager |
multus |
collect-profiles-29414175-675jb |
AddedInterface |
Add eth0 [10.128.0.169/23] from ovn-kubernetes | |
openshift-operator-lifecycle-manager |
kubelet |
collect-profiles-29414175-675jb |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1ca78c423f43f89a0411e40393642f64e4f8df9e5f61c25e31047c4cce170f9" already present on machine | |
openshift-operator-lifecycle-manager |
kubelet |
collect-profiles-29414175-675jb |
Created |
Created container: collect-profiles | |
openshift-operator-lifecycle-manager |
kubelet |
collect-profiles-29414175-675jb |
Started |
Started container collect-profiles | |
openshift-operator-lifecycle-manager |
job-controller |
collect-profiles-29414175 |
Completed |
Job completed | |
openshift-operator-lifecycle-manager |
cronjob-controller |
collect-profiles |
SawCompletedJob |
Saw completed job: collect-profiles-29414175, condition: Complete | |
| (x3) | openstack-operators |
operator-lifecycle-manager |
openstack-operator.v0.5.0 |
InstallSucceeded |
waiting for install components to report healthy |
openstack-operators |
operator-lifecycle-manager |
openstack-operator.v0.5.0 |
ComponentUnhealthy |
installing: deployment changed old hash=9Zx1Pfxu1GV6XSrh2RXcaGGtDDAgCDaP0BggWV, new hash=33j7GRyXkuPk9Y00zVUrb0O3dfF1GW8SncTE56 | |
openstack-operators |
deployment-controller |
openstack-operator-controller-operator |
ScalingReplicaSet |
Scaled up replica set openstack-operator-controller-operator-589d7b4556 to 1 | |
openstack-operators |
replicaset-controller |
openstack-operator-controller-operator-589d7b4556 |
SuccessfulCreate |
Created pod: openstack-operator-controller-operator-589d7b4556-d9qth | |
openstack-operators |
kubelet |
openstack-operator-controller-operator-589d7b4556-d9qth |
Pulled |
Container image "quay.io/openstack-k8s-operators/openstack-operator@sha256:a930bf4711e92a6bdc8a5ddb01a63d3a647a7db5f9ddd19bc897cb74292b8365" already present on machine | |
| (x2) | openstack-operators |
operator-lifecycle-manager |
openstack-operator.v0.5.0 |
InstallWaiting |
installing: waiting for deployment openstack-operator-controller-operator to become ready: waiting for spec update of deployment "openstack-operator-controller-operator" to be observed... |
openstack-operators |
operator-lifecycle-manager |
openstack-operator.v0.5.0 |
InstallWaiting |
installing: waiting for deployment openstack-operator-controller-operator to become ready: deployment "openstack-operator-controller-operator" waiting for 1 outdated replica(s) to be terminated | |
openstack-operators |
multus |
openstack-operator-controller-operator-589d7b4556-d9qth |
AddedInterface |
Add eth0 [10.128.0.170/23] from ovn-kubernetes | |
openstack-operators |
kubelet |
openstack-operator-controller-operator-589d7b4556-d9qth |
Started |
Started container operator | |
openstack-operators |
kubelet |
openstack-operator-controller-operator-589d7b4556-d9qth |
Created |
Created container: operator | |
openstack-operators |
deployment-controller |
openstack-operator-controller-operator |
ScalingReplicaSet |
Scaled down replica set openstack-operator-controller-operator-55b6fb9447 to 0 from 1 | |
| (x2) | openstack-operators |
operator-lifecycle-manager |
openstack-operator.v0.5.0 |
InstallSucceeded |
install strategy completed with no errors |
openstack-operators |
kubelet |
openstack-operator-controller-operator-55b6fb9447-zn55t |
Killing |
Stopping container operator | |
openstack-operators |
replicaset-controller |
openstack-operator-controller-operator-55b6fb9447 |
SuccessfulDelete |
Deleted pod: openstack-operator-controller-operator-55b6fb9447-zn55t | |
openstack-operators |
openstack-operator-controller-operator-589d7b4556-d9qth_2867b1be-75e4-42a7-a028-41521340d8e0 |
20ca801f.openstack.org |
LeaderElection |
openstack-operator-controller-operator-589d7b4556-d9qth_2867b1be-75e4-42a7-a028-41521340d8e0 became leader | |
openstack-operators |
cert-manager-certificates-trigger |
barbican-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-key-manager |
cinder-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "cinder-operator-metrics-certs-qdqpg" | |
openstack-operators |
cert-manager-certificates-trigger |
designate-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
cinder-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
cinder-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
cinder-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
cinder-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
cinder-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
cinder-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificates-trigger |
cinder-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-request-manager |
cinder-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "cinder-operator-metrics-certs-1" | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
cinder-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificates-issuing |
barbican-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificates-request-manager |
barbican-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "barbican-operator-metrics-certs-1" | |
openstack-operators |
cert-manager-certificaterequests-approver |
cinder-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificates-key-manager |
barbican-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "barbican-operator-metrics-certs-8kzms" | |
openstack-operators |
cert-manager-certificates-trigger |
glance-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
barbican-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificaterequests-approver |
barbican-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificates-issuing |
cinder-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
barbican-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
barbican-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
barbican-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
barbican-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
barbican-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
barbican-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
designate-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-request-manager |
designate-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "designate-operator-metrics-certs-1" | |
openstack-operators |
cert-manager-certificates-request-manager |
glance-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "glance-operator-metrics-certs-1" | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
glance-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
glance-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-key-manager |
glance-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "glance-operator-metrics-certs-h5hxf" | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
glance-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
glance-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
glance-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
designate-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-key-manager |
designate-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "designate-operator-metrics-certs-frtk9" | |
openstack-operators |
cert-manager-certificates-trigger |
heat-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
designate-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-key-manager |
heat-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "heat-operator-metrics-certs-2r4dc" | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
designate-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
designate-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-approver |
glance-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
designate-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificates-trigger |
keystone-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-trigger |
manila-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-trigger |
horizon-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificaterequests-approver |
designate-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
designate-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificates-trigger |
ironic-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
glance-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
glance-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificates-trigger |
mariadb-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-trigger |
neutron-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-trigger |
nova-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-key-manager |
keystone-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "keystone-operator-metrics-certs-g8467" | |
openstack-operators |
cert-manager-certificates-key-manager |
manila-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "manila-operator-metrics-certs-vlrnv" | |
openstack-operators |
cert-manager-certificates-trigger |
openstack-baremetal-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-trigger |
ovn-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-trigger |
octavia-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-trigger |
placement-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-key-manager |
horizon-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "horizon-operator-metrics-certs-q4rgq" | |
openstack-operators |
cert-manager-certificates-key-manager |
ironic-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "ironic-operator-metrics-certs-bqwjn" | |
openstack-operators |
deployment-controller |
telemetry-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set telemetry-operator-controller-manager-7b5867bfc7 to 1 | |
openstack-operators |
cert-manager-certificates-issuing |
designate-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
deployment-controller |
heat-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set heat-operator-controller-manager-7fd96594c7 to 1 | |
openstack-operators |
replicaset-controller |
heat-operator-controller-manager-7fd96594c7 |
SuccessfulCreate |
Created pod: heat-operator-controller-manager-7fd96594c7-xnrjg | |
openstack-operators |
replicaset-controller |
openstack-baremetal-operator-controller-manager-6f998f5746 |
SuccessfulCreate |
Created pod: openstack-baremetal-operator-controller-manager-6f998f5746f9gjr | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
heat-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
heat-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
heat-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
heat-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
heat-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
deployment-controller |
infra-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set infra-operator-controller-manager-7d9c9d7fd8 to 1 | |
openstack-operators |
replicaset-controller |
ironic-operator-controller-manager-7c9bfd6967 |
SuccessfulCreate |
Created pod: ironic-operator-controller-manager-7c9bfd6967-782xf | |
openstack-operators |
deployment-controller |
ironic-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set ironic-operator-controller-manager-7c9bfd6967 to 1 | |
openstack-operators |
replicaset-controller |
ovn-operator-controller-manager-647f96877 |
SuccessfulCreate |
Created pod: ovn-operator-controller-manager-647f96877-75x24 | |
openstack-operators |
replicaset-controller |
mariadb-operator-controller-manager-647d75769b |
SuccessfulCreate |
Created pod: mariadb-operator-controller-manager-647d75769b-8dqxm | |
openstack-operators |
replicaset-controller |
neutron-operator-controller-manager-7cdd6b54fb |
SuccessfulCreate |
Created pod: neutron-operator-controller-manager-7cdd6b54fb-4jl24 | |
openstack-operators |
deployment-controller |
octavia-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set octavia-operator-controller-manager-845b79dc4f to 1 | |
openstack-operators |
replicaset-controller |
octavia-operator-controller-manager-845b79dc4f |
SuccessfulCreate |
Created pod: octavia-operator-controller-manager-845b79dc4f-rs4fz | |
openstack-operators |
deployment-controller |
neutron-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set neutron-operator-controller-manager-7cdd6b54fb to 1 | |
openstack-operators |
deployment-controller |
ovn-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set ovn-operator-controller-manager-647f96877 to 1 | |
openstack-operators |
deployment-controller |
openstack-baremetal-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set openstack-baremetal-operator-controller-manager-6f998f5746 to 1 | |
openstack-operators |
deployment-controller |
mariadb-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set mariadb-operator-controller-manager-647d75769b to 1 | |
openstack-operators |
cert-manager-certificates-trigger |
telemetry-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-request-manager |
heat-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "heat-operator-metrics-certs-1" | |
openstack-operators |
deployment-controller |
designate-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set designate-operator-controller-manager-84bc9f68f5 to 1 | |
openstack-operators |
deployment-controller |
cinder-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set cinder-operator-controller-manager-f8856dd79 to 1 | |
openstack-operators |
replicaset-controller |
infra-operator-controller-manager-7d9c9d7fd8 |
SuccessfulCreate |
Created pod: infra-operator-controller-manager-7d9c9d7fd8-f228s | |
openstack-operators |
replicaset-controller |
horizon-operator-controller-manager-f6cc97788 |
SuccessfulCreate |
Created pod: horizon-operator-controller-manager-f6cc97788-dtxsw | |
openstack-operators |
deployment-controller |
horizon-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set horizon-operator-controller-manager-f6cc97788 to 1 | |
openstack-operators |
cert-manager-certificates-trigger |
test-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
replicaset-controller |
telemetry-operator-controller-manager-7b5867bfc7 |
SuccessfulCreate |
Created pod: telemetry-operator-controller-manager-7b5867bfc7-tfj67 | |
openstack-operators |
replicaset-controller |
designate-operator-controller-manager-84bc9f68f5 |
SuccessfulCreate |
Created pod: designate-operator-controller-manager-84bc9f68f5-jjlq7 | |
openstack-operators |
replicaset-controller |
keystone-operator-controller-manager-58b8dcc5fb |
SuccessfulCreate |
Created pod: keystone-operator-controller-manager-58b8dcc5fb-bpmdw | |
openstack-operators |
deployment-controller |
keystone-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set keystone-operator-controller-manager-58b8dcc5fb to 1 | |
openstack-operators |
replicaset-controller |
barbican-operator-controller-manager-5cd89994b5 |
SuccessfulCreate |
Created pod: barbican-operator-controller-manager-5cd89994b5-974hd | |
openstack-operators |
deployment-controller |
barbican-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set barbican-operator-controller-manager-5cd89994b5 to 1 | |
openstack-operators |
deployment-controller |
nova-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set nova-operator-controller-manager-865fc86d5b to 1 | |
openstack-operators |
replicaset-controller |
nova-operator-controller-manager-865fc86d5b |
SuccessfulCreate |
Created pod: nova-operator-controller-manager-865fc86d5b-fd78q | |
openstack-operators |
cert-manager-certificates-trigger |
swift-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
deployment-controller |
swift-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set swift-operator-controller-manager-696b999796 to 1 | |
openstack-operators |
replicaset-controller |
swift-operator-controller-manager-696b999796 |
SuccessfulCreate |
Created pod: swift-operator-controller-manager-696b999796-p6w8q | |
openstack-operators |
deployment-controller |
glance-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set glance-operator-controller-manager-78cd4f7769 to 1 | |
openstack-operators |
deployment-controller |
manila-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set manila-operator-controller-manager-56f9fbf74b to 1 | |
openstack-operators |
replicaset-controller |
manila-operator-controller-manager-56f9fbf74b |
SuccessfulCreate |
Created pod: manila-operator-controller-manager-56f9fbf74b-hq5jr | |
openstack-operators |
replicaset-controller |
glance-operator-controller-manager-78cd4f7769 |
SuccessfulCreate |
Created pod: glance-operator-controller-manager-78cd4f7769-58c6v | |
openstack-operators |
deployment-controller |
placement-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set placement-operator-controller-manager-6b64f6f645 to 1 | |
openstack-operators |
replicaset-controller |
placement-operator-controller-manager-6b64f6f645 |
SuccessfulCreate |
Created pod: placement-operator-controller-manager-6b64f6f645-rddjv | |
openstack-operators |
cert-manager-certificates-key-manager |
mariadb-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "mariadb-operator-metrics-certs-gxngr" | |
openstack-operators |
replicaset-controller |
cinder-operator-controller-manager-f8856dd79 |
SuccessfulCreate |
Created pod: cinder-operator-controller-manager-f8856dd79-mfhwn | |
openstack-operators |
deployment-controller |
watcher-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set watcher-operator-controller-manager-6b9b669fdb to 1 | |
openstack-operators |
cert-manager-certificates-trigger |
infra-operator-serving-cert |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
replicaset-controller |
test-operator-controller-manager-57dfcdd5b8 |
SuccessfulCreate |
Created pod: test-operator-controller-manager-57dfcdd5b8-twtzz | |
openstack-operators |
deployment-controller |
test-operator-controller-manager |
ScalingReplicaSet |
Scaled up replica set test-operator-controller-manager-57dfcdd5b8 to 1 | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
heat-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
heat-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
replicaset-controller |
watcher-operator-controller-manager-6b9b669fdb |
SuccessfulCreate |
Created pod: watcher-operator-controller-manager-6b9b669fdb-jsphj | |
openstack-operators |
cert-manager-certificaterequests-approver |
heat-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificaterequests-approver |
horizon-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
horizon-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-key-manager |
placement-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "placement-operator-metrics-certs-jlkjz" | |
openstack-operators |
cert-manager-certificates-key-manager |
octavia-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "octavia-operator-metrics-certs-ghtwq" | |
openstack-operators |
multus |
barbican-operator-controller-manager-5cd89994b5-974hd |
AddedInterface |
Add eth0 [10.128.0.171/23] from ovn-kubernetes | |
openstack-operators |
cert-manager-certificates-request-manager |
horizon-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "horizon-operator-metrics-certs-1" | |
openstack-operators |
cert-manager-certificates-trigger |
openstack-baremetal-operator-serving-cert |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
horizon-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
horizon-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
| | openstack-operators | cert-manager-certificates-issuing | glance-operator-metrics-certs | Issuing | The certificate has been successfully issued |
| | openstack-operators | cert-manager-certificates-key-manager | neutron-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "neutron-operator-metrics-certs-zhnk5" |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | horizon-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | horizon-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | horizon-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | horizon-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-key-manager | nova-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "nova-operator-metrics-certs-lr5pl" |
| | openstack-operators | kubelet | barbican-operator-controller-manager-5cd89994b5-974hd | Pulling | Pulling image "quay.io/openstack-k8s-operators/barbican-operator@sha256:f6059a0fbf031d34dcf086d14ce8c0546caeaee23c5780e90b5037c5feee9fea" |
| | openstack-operators | replicaset-controller | openstack-operator-controller-manager-599cfccd85 | SuccessfulCreate | Created pod: openstack-operator-controller-manager-599cfccd85-dgvwj |
| | openstack-operators | deployment-controller | openstack-operator-controller-manager | ScalingReplicaSet | Scaled up replica set openstack-operator-controller-manager-599cfccd85 to 1 |
| | openstack-operators | multus | heat-operator-controller-manager-7fd96594c7-xnrjg | AddedInterface | Add eth0 [10.128.0.175/23] from ovn-kubernetes |
| | openstack-operators | multus | telemetry-operator-controller-manager-7b5867bfc7-tfj67 | AddedInterface | Add eth0 [10.128.0.189/23] from ovn-kubernetes |
| | openstack-operators | cert-manager-certificates-trigger | openstack-operator-serving-cert | Issuing | Issuing certificate as Secret does not exist |
| | openstack-operators | cert-manager-certificates-trigger | openstack-operator-metrics-certs | Issuing | Issuing certificate as Secret does not exist |
| | openstack-operators | multus | ovn-operator-controller-manager-647f96877-75x24 | AddedInterface | Add eth0 [10.128.0.186/23] from ovn-kubernetes |
| | openstack-operators | cert-manager-certificates-key-manager | ovn-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "ovn-operator-metrics-certs-4lxtm" |
| | openstack-operators | multus | placement-operator-controller-manager-6b64f6f645-rddjv | AddedInterface | Add eth0 [10.128.0.187/23] from ovn-kubernetes |
| | openstack-operators | deployment-controller | rabbitmq-cluster-operator-manager | ScalingReplicaSet | Scaled up replica set rabbitmq-cluster-operator-manager-78955d896f to 1 |
| | openstack-operators | multus | neutron-operator-controller-manager-7cdd6b54fb-4jl24 | AddedInterface | Add eth0 [10.128.0.182/23] from ovn-kubernetes |
| | openstack-operators | multus | manila-operator-controller-manager-56f9fbf74b-hq5jr | AddedInterface | Add eth0 [10.128.0.180/23] from ovn-kubernetes |
| | openstack-operators | multus | test-operator-controller-manager-57dfcdd5b8-twtzz | AddedInterface | Add eth0 [10.128.0.190/23] from ovn-kubernetes |
| | openstack-operators | cert-manager-certificates-key-manager | telemetry-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "telemetry-operator-metrics-certs-tjkgx" |
| | openstack-operators | kubelet | cinder-operator-controller-manager-f8856dd79-mfhwn | Pulling | Pulling image "quay.io/openstack-k8s-operators/cinder-operator@sha256:1d60701214b39cdb0fa70bbe5710f9b131139a9f4b482c2db4058a04daefb801" |
| | openstack-operators | multus | cinder-operator-controller-manager-f8856dd79-mfhwn | AddedInterface | Add eth0 [10.128.0.172/23] from ovn-kubernetes |
| | openstack-operators | multus | mariadb-operator-controller-manager-647d75769b-8dqxm | AddedInterface | Add eth0 [10.128.0.181/23] from ovn-kubernetes |
| | openstack-operators | cert-manager-certificates-issuing | heat-operator-metrics-certs | Issuing | The certificate has been successfully issued |
| | openstack-operators | multus | ironic-operator-controller-manager-7c9bfd6967-782xf | AddedInterface | Add eth0 [10.128.0.178/23] from ovn-kubernetes |
| | openstack-operators | multus | horizon-operator-controller-manager-f6cc97788-dtxsw | AddedInterface | Add eth0 [10.128.0.176/23] from ovn-kubernetes |
| | openstack-operators | kubelet | manila-operator-controller-manager-56f9fbf74b-hq5jr | Pulling | Pulling image "quay.io/openstack-k8s-operators/manila-operator@sha256:2e59cfbeefc3aff0bb0a6ae9ce2235129f5173c98dd5ee8dac229ad4895faea9" |
| | openstack-operators | multus | octavia-operator-controller-manager-845b79dc4f-rs4fz | AddedInterface | Add eth0 [10.128.0.184/23] from ovn-kubernetes |
| | openstack-operators | kubelet | horizon-operator-controller-manager-f6cc97788-dtxsw | Pulling | Pulling image "quay.io/openstack-k8s-operators/horizon-operator@sha256:9e847f4dbdea19ab997f32a02b3680a9bd966f9c705911645c3866a19fda9ea5" |
| | openstack-operators | multus | glance-operator-controller-manager-78cd4f7769-58c6v | AddedInterface | Add eth0 [10.128.0.174/23] from ovn-kubernetes |
| | openstack-operators | kubelet | glance-operator-controller-manager-78cd4f7769-58c6v | Pulling | Pulling image "quay.io/openstack-k8s-operators/glance-operator@sha256:abdb733b01e92ac17f565762f30f1d075b44c16421bd06e557f6bb3c319e1809" |
| | openstack-operators | multus | swift-operator-controller-manager-696b999796-p6w8q | AddedInterface | Add eth0 [10.128.0.188/23] from ovn-kubernetes |
| | openstack-operators | kubelet | designate-operator-controller-manager-84bc9f68f5-jjlq7 | Pulling | Pulling image "quay.io/openstack-k8s-operators/designate-operator@sha256:9f68d7bc8c6bce38f46dee8a8272d5365c49fe7b32b2af52e8ac884e212f3a85" |
| | openstack-operators | multus | designate-operator-controller-manager-84bc9f68f5-jjlq7 | AddedInterface | Add eth0 [10.128.0.173/23] from ovn-kubernetes |
| | openstack-operators | multus | keystone-operator-controller-manager-58b8dcc5fb-bpmdw | AddedInterface | Add eth0 [10.128.0.179/23] from ovn-kubernetes |
| | openstack-operators | cert-manager-certificates-key-manager | openstack-baremetal-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "openstack-baremetal-operator-metrics-certs-j9vw2" |
| | openstack-operators | cert-manager-certificates-key-manager | swift-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "swift-operator-metrics-certs-cgqjm" |
| | openstack-operators | multus | nova-operator-controller-manager-865fc86d5b-fd78q | AddedInterface | Add eth0 [10.128.0.183/23] from ovn-kubernetes |
| | openstack-operators | kubelet | telemetry-operator-controller-manager-7b5867bfc7-tfj67 | Pulling | Pulling image "quay.io/openstack-k8s-operators/telemetry-operator@sha256:7d66757c0af67104f0389e851a7cc0daa44443ad202d157417bd86bbb57cc385" |
| | openstack-operators | kubelet | mariadb-operator-controller-manager-647d75769b-8dqxm | Pulling | Pulling image "quay.io/openstack-k8s-operators/mariadb-operator@sha256:600ca007e493d3af0fcc2ebac92e8da5efd2afe812b62d7d3d4dd0115bdf05d7" |
| | openstack-operators | kubelet | watcher-operator-controller-manager-6b9b669fdb-jsphj | Failed | Failed to pull image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0": pull QPS exceeded |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | openstack-baremetal-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-request-manager | openstack-baremetal-operator-metrics-certs | Requested | Created new CertificateRequest resource "openstack-baremetal-operator-metrics-certs-1" |
| | openstack-operators | kubelet | swift-operator-controller-manager-696b999796-p6w8q | Pulling | Pulling image "quay.io/openstack-k8s-operators/swift-operator@sha256:2a3d21728a8bfb4e64617e63e61e2d1cb70a383ea3e8f846e0c3c3c02d2b0a9d" |
| | openstack-operators | kubelet | watcher-operator-controller-manager-6b9b669fdb-jsphj | Failed | Error: ErrImagePull |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-baremetal-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | openstack-baremetal-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | openstack-baremetal-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | openstack-baremetal-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | kubelet | keystone-operator-controller-manager-58b8dcc5fb-bpmdw | Failed | Failed to pull image "quay.io/openstack-k8s-operators/keystone-operator@sha256:72ad6517987f674af0d0ae092cbb874aeae909c8b8b60188099c311762ebc8f7": pull QPS exceeded |
| | openstack-operators | kubelet | keystone-operator-controller-manager-58b8dcc5fb-bpmdw | Failed | Error: ErrImagePull |
| | openstack-operators | kubelet | neutron-operator-controller-manager-7cdd6b54fb-4jl24 | Pulling | Pulling image "quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557" |
| | openstack-operators | kubelet | octavia-operator-controller-manager-845b79dc4f-rs4fz | Failed | Failed to pull image "quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168": pull QPS exceeded |
| | openstack-operators | kubelet | heat-operator-controller-manager-7fd96594c7-xnrjg | Pulling | Pulling image "quay.io/openstack-k8s-operators/heat-operator@sha256:c4abfc148600dfa85915f3dc911d988ea2335f26cb6b8d749fe79bfe53e5e429" |
| | openstack-operators | kubelet | nova-operator-controller-manager-865fc86d5b-fd78q | Pulling | Pulling image "quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670" |
| | openstack-operators | kubelet | test-operator-controller-manager-57dfcdd5b8-twtzz | Pulling | Pulling image "quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94" |
| | openstack-operators | cert-manager-certificates-key-manager | test-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "test-operator-metrics-certs-6pfmb" |
| | openstack-operators | multus | watcher-operator-controller-manager-6b9b669fdb-jsphj | AddedInterface | Add eth0 [10.128.0.191/23] from ovn-kubernetes |
| | openstack-operators | replicaset-controller | rabbitmq-cluster-operator-manager-78955d896f | SuccessfulCreate | Created pod: rabbitmq-cluster-operator-manager-78955d896f-94dl9 |
| | openstack-operators | kubelet | keystone-operator-controller-manager-58b8dcc5fb-bpmdw | Failed | Failed to pull image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0": pull QPS exceeded |
| | openstack-operators | cert-manager-certificates-issuing | horizon-operator-metrics-certs | Issuing | The certificate has been successfully issued |
| | openstack-operators | kubelet | placement-operator-controller-manager-6b64f6f645-rddjv | Failed | Error: ErrImagePull |
| | openstack-operators | kubelet | placement-operator-controller-manager-6b64f6f645-rddjv | Failed | Failed to pull image "quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f": pull QPS exceeded |
| | openstack-operators | kubelet | ironic-operator-controller-manager-7c9bfd6967-782xf | Pulling | Pulling image "quay.io/openstack-k8s-operators/ironic-operator@sha256:0f523b7e2fa9e86fef986acf07d0c42d5658c475d565f11eaea926ebffcb6530" |
| | openstack-operators | kubelet | watcher-operator-controller-manager-6b9b669fdb-jsphj | Failed | Failed to pull image "quay.io/openstack-k8s-operators/watcher-operator@sha256:9aa8c03633e4b934c57868c1660acf47e7d386ac86bcb344df262c9ad76b8621": pull QPS exceeded |
| | openstack-operators | kubelet | ovn-operator-controller-manager-647f96877-75x24 | Pulling | Pulling image "quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59" |
| | openstack-operators | kubelet | watcher-operator-controller-manager-6b9b669fdb-jsphj | Failed | Error: ErrImagePull |
| | openstack-operators | kubelet | octavia-operator-controller-manager-845b79dc4f-rs4fz | Failed | Error: ErrImagePull |
| | openstack-operators | kubelet | keystone-operator-controller-manager-58b8dcc5fb-bpmdw | Failed | Error: ErrImagePull |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | telemetry-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | telemetry-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | ironic-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | ironic-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-request-manager | keystone-operator-metrics-certs | Requested | Created new CertificateRequest resource "keystone-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | ironic-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | ironic-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | keystone-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | keystone-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | ironic-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| (x2) | openstack-operators | kubelet | octavia-operator-controller-manager-845b79dc4f-rs4fz | Failed | Error: ImagePullBackOff |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | keystone-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | keystone-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | keystone-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| (x2) | openstack-operators | kubelet | octavia-operator-controller-manager-845b79dc4f-rs4fz | BackOff | Back-off pulling image "quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168" |
| | openstack-operators | cert-manager-certificates-request-manager | telemetry-operator-metrics-certs | Requested | Created new CertificateRequest resource "telemetry-operator-metrics-certs-1" |
| (x2) | openstack-operators | kubelet | keystone-operator-controller-manager-58b8dcc5fb-bpmdw | Failed | Error: ImagePullBackOff |
| (x2) | openstack-operators | kubelet | keystone-operator-controller-manager-58b8dcc5fb-bpmdw | BackOff | Back-off pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" |
| | openstack-operators | cert-manager-certificaterequests-approver | telemetry-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | telemetry-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | telemetry-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-key-manager | infra-operator-serving-cert | Generated | Stored new private key in temporary Secret resource "infra-operator-serving-cert-l67rb" |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | telemetry-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-request-manager | ironic-operator-metrics-certs | Requested | Created new CertificateRequest resource "ironic-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | telemetry-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | telemetry-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| (x2) | openstack-operators | kubelet | placement-operator-controller-manager-6b64f6f645-rddjv | BackOff | Back-off pulling image "quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f" |
| (x2) | openstack-operators | kubelet | placement-operator-controller-manager-6b64f6f645-rddjv | Failed | Error: ImagePullBackOff |
| | openstack-operators | cert-manager-certificaterequests-approver | openstack-baremetal-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-baremetal-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| (x2) | openstack-operators | kubelet | watcher-operator-controller-manager-6b9b669fdb-jsphj | Failed | Error: ImagePullBackOff |
| (x2) | openstack-operators | kubelet | watcher-operator-controller-manager-6b9b669fdb-jsphj | BackOff | Back-off pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" |
| (x2) | openstack-operators | kubelet | keystone-operator-controller-manager-58b8dcc5fb-bpmdw | Failed | Error: ImagePullBackOff |
| (x2) | openstack-operators | kubelet | keystone-operator-controller-manager-58b8dcc5fb-bpmdw | BackOff | Back-off pulling image "quay.io/openstack-k8s-operators/keystone-operator@sha256:72ad6517987f674af0d0ae092cbb874aeae909c8b8b60188099c311762ebc8f7" |
| (x2) | openstack-operators | kubelet | watcher-operator-controller-manager-6b9b669fdb-jsphj | Failed | Error: ImagePullBackOff |
| (x2) | openstack-operators | kubelet | watcher-operator-controller-manager-6b9b669fdb-jsphj | BackOff | Back-off pulling image "quay.io/openstack-k8s-operators/watcher-operator@sha256:9aa8c03633e4b934c57868c1660acf47e7d386ac86bcb344df262c9ad76b8621" |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-baremetal-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | manila-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | keystone-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | keystone-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | infra-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | infra-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | infra-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-request-manager | test-operator-metrics-certs | Requested | Created new CertificateRequest resource "test-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | ironic-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificaterequests-approver | ironic-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | ironic-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | infra-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-request-manager | infra-operator-serving-cert | Requested | Created new CertificateRequest resource "infra-operator-serving-cert-1" |
| | openstack-operators | cert-manager-certificates-request-manager | manila-operator-metrics-certs | Requested | Created new CertificateRequest resource "manila-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | manila-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | manila-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-key-manager | openstack-baremetal-operator-serving-cert | Generated | Stored new private key in temporary Secret resource "openstack-baremetal-operator-serving-cert-9xt4r" |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | infra-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | test-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | test-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | test-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | test-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | manila-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | manila-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | test-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-approver | keystone-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-approver | test-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | test-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | test-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | manila-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificaterequests-approver | manila-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| (x5) | openstack-operators | kubelet | infra-operator-controller-manager-7d9c9d7fd8-f228s | FailedMount | MountVolume.SetUp failed for volume "cert" : secret "infra-operator-webhook-server-cert" not found |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | manila-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificates-request-manager | mariadb-operator-metrics-certs | Requested | Created new CertificateRequest resource "mariadb-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | nova-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-key-manager | openstack-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "openstack-operator-metrics-certs-whwxk" |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | mariadb-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | mariadb-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | mariadb-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| (x5) | openstack-operators | kubelet | openstack-baremetal-operator-controller-manager-6f998f5746f9gjr | FailedMount | MountVolume.SetUp failed for volume "cert" : secret "openstack-baremetal-operator-webhook-server-cert" not found |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | mariadb-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-request-manager | nova-operator-metrics-certs | Requested | Created new CertificateRequest resource "nova-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | mariadb-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-issuing | openstack-baremetal-operator-metrics-certs | Issuing | The certificate has been successfully issued |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | neutron-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | neutron-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | neutron-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | nova-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-key-manager | openstack-operator-serving-cert | Generated | Stored new private key in temporary Secret resource "openstack-operator-serving-cert-rhvkq" |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | nova-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | nova-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | nova-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-request-manager | neutron-operator-metrics-certs | Requested | Created new CertificateRequest resource "neutron-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | neutron-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | neutron-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | infra-operator-serving-cert-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | infra-operator-serving-cert-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-approver | infra-operator-serving-cert-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificates-issuing | telemetry-operator-metrics-certs | Issuing | The certificate has been successfully issued |
| | openstack-operators | cert-manager-certificaterequests-approver | neutron-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | neutron-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | neutron-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificates-request-manager | octavia-operator-metrics-certs | Requested | Created new CertificateRequest resource "octavia-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificaterequests-approver | nova-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | nova-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | openstack-baremetal-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-baremetal-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | openstack-baremetal-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | openstack-baremetal-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | openstack-baremetal-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | mariadb-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificates-request-manager | openstack-baremetal-operator-serving-cert | Requested | Created new CertificateRequest resource "openstack-baremetal-operator-serving-cert-1" |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | mariadb-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-approver | mariadb-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | nova-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | octavia-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | octavia-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | octavia-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | octavia-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | octavia-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-baremetal-operator-serving-cert-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | swift-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
placement-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
placement-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
placement-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
placement-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-issuing |
keystone-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
octavia-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
octavia-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
placement-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-approver |
octavia-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificates-request-manager |
placement-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "placement-operator-metrics-certs-1" | |
openstack-operators |
cert-manager-certificates-request-manager |
openstack-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "openstack-operator-metrics-certs-1" | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
openstack-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-issuing |
ironic-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
openstack-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
openstack-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
openstack-baremetal-operator-serving-cert-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificaterequests-approver |
openstack-baremetal-operator-serving-cert-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
openstack-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
swift-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
swift-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
swift-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
| (x5) | openstack-operators | kubelet | openstack-operator-controller-manager-599cfccd85-dgvwj | FailedMount | MountVolume.SetUp failed for volume "webhook-certs" : secret "webhook-server-cert" not found |
| | openstack-operators | cert-manager-certificates-request-manager | swift-operator-metrics-certs | Requested | Created new CertificateRequest resource "swift-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | openstack-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| (x5) | openstack-operators | kubelet | openstack-operator-controller-manager-599cfccd85-dgvwj | FailedMount | MountVolume.SetUp failed for volume "metrics-certs" : secret "metrics-server-cert" not found |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | swift-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | openstack-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | swift-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificates-request-manager | ovn-operator-metrics-certs | Requested | Created new CertificateRequest resource "ovn-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | swift-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | ovn-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | ovn-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | ovn-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | ovn-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | ovn-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-issuing | infra-operator-serving-cert | Issuing | The certificate has been successfully issued |
| | openstack-operators | cert-manager-certificates-issuing | test-operator-metrics-certs | Issuing | The certificate has been successfully issued |
| | openstack-operators | cert-manager-certificaterequests-approver | placement-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | placement-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | placement-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificates-request-manager | openstack-operator-serving-cert | Requested | Created new CertificateRequest resource "openstack-operator-serving-cert-1" |
| | openstack-operators | cert-manager-certificaterequests-approver | swift-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-approver | openstack-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificates-issuing | manila-operator-metrics-certs | Issuing | The certificate has been successfully issued |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | openstack-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | openstack-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | openstack-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-operator-serving-cert-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | multus | rabbitmq-cluster-operator-manager-78955d896f-94dl9 | AddedInterface | Add eth0 [10.128.0.193/23] from ovn-kubernetes |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | ovn-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-approver | ovn-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-approver | openstack-operator-serving-cert-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | ovn-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-operator-serving-cert-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificates-issuing | nova-operator-metrics-certs | Issuing | The certificate has been successfully issued |
| | openstack-operators | cert-manager-certificates-issuing | mariadb-operator-metrics-certs | Issuing | The certificate has been successfully issued |
| | openstack-operators | cert-manager-certificates-issuing | octavia-operator-metrics-certs | Issuing | The certificate has been successfully issued |
| (x2) | openstack-operators | kubelet | placement-operator-controller-manager-6b64f6f645-rddjv | Pulling | Pulling image "quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f" |
| | openstack-operators | cert-manager-certificates-issuing | neutron-operator-metrics-certs | Issuing | The certificate has been successfully issued |
| | openstack-operators | kubelet | rabbitmq-cluster-operator-manager-78955d896f-94dl9 | Pulling | Pulling image "quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" |
| (x2) | openstack-operators | kubelet | keystone-operator-controller-manager-58b8dcc5fb-bpmdw | Pulling | Pulling image "quay.io/openstack-k8s-operators/keystone-operator@sha256:72ad6517987f674af0d0ae092cbb874aeae909c8b8b60188099c311762ebc8f7" |
| | openstack-operators | cert-manager-certificates-issuing | openstack-baremetal-operator-serving-cert | Issuing | The certificate has been successfully issued |
| | openstack-operators | cert-manager-certificates-issuing | openstack-operator-serving-cert | Issuing | The certificate has been successfully issued |
| | openstack-operators | cert-manager-certificates-issuing | openstack-operator-metrics-certs | Issuing | The certificate has been successfully issued |
| | openstack-operators | cert-manager-certificates-issuing | ovn-operator-metrics-certs | Issuing | The certificate has been successfully issued |
| | openstack-operators | cert-manager-certificates-issuing | swift-operator-metrics-certs | Issuing | The certificate has been successfully issued |
| (x2) | openstack-operators | kubelet | octavia-operator-controller-manager-845b79dc4f-rs4fz | Pulling | Pulling image "quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168" |
| (x2) | openstack-operators | kubelet | watcher-operator-controller-manager-6b9b669fdb-jsphj | Pulling | Pulling image "quay.io/openstack-k8s-operators/watcher-operator@sha256:9aa8c03633e4b934c57868c1660acf47e7d386ac86bcb344df262c9ad76b8621" |
| | openstack-operators | cert-manager-certificates-issuing | placement-operator-metrics-certs | Issuing | The certificate has been successfully issued |
| | openstack-operators | kubelet | horizon-operator-controller-manager-f6cc97788-dtxsw | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/horizon-operator@sha256:9e847f4dbdea19ab997f32a02b3680a9bd966f9c705911645c3866a19fda9ea5" in 17.326s (17.326s including waiting). Image size: 189868493 bytes. |
| | openstack-operators | kubelet | barbican-operator-controller-manager-5cd89994b5-974hd | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/barbican-operator@sha256:f6059a0fbf031d34dcf086d14ce8c0546caeaee23c5780e90b5037c5feee9fea" in 18.527s (18.527s including waiting). Image size: 190758360 bytes. |
| | openstack-operators | kubelet | glance-operator-controller-manager-78cd4f7769-58c6v | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/glance-operator@sha256:abdb733b01e92ac17f565762f30f1d075b44c16421bd06e557f6bb3c319e1809" in 46.986s (46.986s including waiting). Image size: 191652289 bytes. |
| | openstack-operators | kubelet | mariadb-operator-controller-manager-647d75769b-8dqxm | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/mariadb-operator@sha256:600ca007e493d3af0fcc2ebac92e8da5efd2afe812b62d7d3d4dd0115bdf05d7" in 46.451s (46.451s including waiting). Image size: 189260496 bytes. |
| | openstack-operators | kubelet | cinder-operator-controller-manager-f8856dd79-mfhwn | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/cinder-operator@sha256:1d60701214b39cdb0fa70bbe5710f9b131139a9f4b482c2db4058a04daefb801" in 47.269s (47.269s including waiting). Image size: 191083456 bytes. |
| | openstack-operators | kubelet | designate-operator-controller-manager-84bc9f68f5-jjlq7 | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/designate-operator@sha256:9f68d7bc8c6bce38f46dee8a8272d5365c49fe7b32b2af52e8ac884e212f3a85" in 46.958s (46.959s including waiting). Image size: 194596839 bytes. |
| | openstack-operators | kubelet | manila-operator-controller-manager-56f9fbf74b-hq5jr | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/manila-operator@sha256:2e59cfbeefc3aff0bb0a6ae9ce2235129f5173c98dd5ee8dac229ad4895faea9" in 48.445s (48.445s including waiting). Image size: 190919617 bytes. |
| | openstack-operators | kubelet | test-operator-controller-manager-57dfcdd5b8-twtzz | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94" in 47.93s (47.931s including waiting). Image size: 188866491 bytes. |
| | openstack-operators | kubelet | heat-operator-controller-manager-7fd96594c7-xnrjg | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/heat-operator@sha256:c4abfc148600dfa85915f3dc911d988ea2335f26cb6b8d749fe79bfe53e5e429" in 52.467s (52.467s including waiting). Image size: 191230375 bytes. |
| | openstack-operators | kubelet | telemetry-operator-controller-manager-7b5867bfc7-tfj67 | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/telemetry-operator@sha256:7d66757c0af67104f0389e851a7cc0daa44443ad202d157417bd86bbb57cc385" in 52.927s (52.927s including waiting). Image size: 195747812 bytes. |
| | openstack-operators | kubelet | ovn-operator-controller-manager-647f96877-75x24 | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59" in 52.315s (52.315s including waiting). Image size: 190094746 bytes. |
| | openstack-operators | kubelet | neutron-operator-controller-manager-7cdd6b54fb-4jl24 | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557" in 52.627s (52.627s including waiting). Image size: 190697931 bytes. |
| | openstack-operators | kubelet | swift-operator-controller-manager-696b999796-p6w8q | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/swift-operator@sha256:2a3d21728a8bfb4e64617e63e61e2d1cb70a383ea3e8f846e0c3c3c02d2b0a9d" in 51.749s (51.749s including waiting). Image size: 191790512 bytes. |
| | openstack-operators | kubelet | ironic-operator-controller-manager-7c9bfd6967-782xf | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/ironic-operator@sha256:0f523b7e2fa9e86fef986acf07d0c42d5658c475d565f11eaea926ebffcb6530" in 52.614s (52.614s including waiting). Image size: 191302081 bytes. |
| | openstack-operators | multus | openstack-baremetal-operator-controller-manager-6f998f5746f9gjr | AddedInterface | Add eth0 [10.128.0.185/23] from ovn-kubernetes |
| | openstack-operators | kubelet | octavia-operator-controller-manager-845b79dc4f-rs4fz | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168" in 40.102s (40.102s including waiting). Image size: 192837582 bytes. |
| | openstack-operators | kubelet | nova-operator-controller-manager-865fc86d5b-fd78q | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670" in 54.838s (54.838s including waiting). Image size: 193269376 bytes. |
| | openstack-operators | kubelet | placement-operator-controller-manager-6b64f6f645-rddjv | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f" in 42.037s (42.037s including waiting). Image size: 190053350 bytes. |
| | openstack-operators | kubelet | barbican-operator-controller-manager-5cd89994b5-974hd | Created | Created container: manager |
| | openstack-operators | kubelet | rabbitmq-cluster-operator-manager-78955d896f-94dl9 | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" in 41.772s (41.772s including waiting). Image size: 176351298 bytes. |
| | openstack-operators | kubelet | watcher-operator-controller-manager-6b9b669fdb-jsphj | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/watcher-operator@sha256:9aa8c03633e4b934c57868c1660acf47e7d386ac86bcb344df262c9ad76b8621" in 40.135s (40.135s including waiting). Image size: 177172942 bytes. |
| | openstack-operators | kubelet | openstack-baremetal-operator-controller-manager-6f998f5746f9gjr | Pulling | Pulling image "quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:14cfad6ea2e7f7ecc4cb2aafceb9c61514b3d04b66668832d1e4ac3b19f1ab81" |
| | openstack-operators | kubelet | barbican-operator-controller-manager-5cd89994b5-974hd | Pulling | Pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" |
| | openstack-operators | kubelet | keystone-operator-controller-manager-58b8dcc5fb-bpmdw | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/keystone-operator@sha256:72ad6517987f674af0d0ae092cbb874aeae909c8b8b60188099c311762ebc8f7" in 41.395s (41.395s including waiting). Image size: 192218533 bytes. |
| | openstack-operators | kubelet | infra-operator-controller-manager-7d9c9d7fd8-f228s | Pulling | Pulling image "quay.io/openstack-k8s-operators/infra-operator@sha256:09a6d0613ee2d3c1c809fc36c22678458ac271e0da87c970aec0a5339f5423f7" |
| | openstack-operators | multus | infra-operator-controller-manager-7d9c9d7fd8-f228s | AddedInterface | Add eth0 [10.128.0.177/23] from ovn-kubernetes |
| | openstack-operators | cinder-operator-controller-manager-f8856dd79-mfhwn_26dc4630-7aca-4f5f-ad0e-d5f522188a40 | a6b6a260.openstack.org | LeaderElection | cinder-operator-controller-manager-f8856dd79-mfhwn_26dc4630-7aca-4f5f-ad0e-d5f522188a40 became leader |
| | openstack-operators | kubelet | test-operator-controller-manager-57dfcdd5b8-twtzz | Pulling | Pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" |
| | openstack-operators | kubelet | mariadb-operator-controller-manager-647d75769b-8dqxm | Created | Created container: manager |
| | openstack-operators | kubelet | mariadb-operator-controller-manager-647d75769b-8dqxm | Started | Started container manager |
| | openstack-operators | kubelet | mariadb-operator-controller-manager-647d75769b-8dqxm | Pulling | Pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" |
| | openstack-operators | kubelet | barbican-operator-controller-manager-5cd89994b5-974hd | Started | Started container manager |
| | openstack-operators | kubelet | horizon-operator-controller-manager-f6cc97788-dtxsw | Pulling | Pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" |
| | openstack-operators | kubelet | horizon-operator-controller-manager-f6cc97788-dtxsw | Started | Started container manager |
| | openstack-operators | kubelet | horizon-operator-controller-manager-f6cc97788-dtxsw | Created | Created container: manager |
| | openstack-operators | kubelet | heat-operator-controller-manager-7fd96594c7-xnrjg | Pulling | Pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" |
| | openstack-operators | barbican-operator-controller-manager-5cd89994b5-974hd_095bce3a-264f-4f9a-a17b-2587d9f6b242 | 8cc931b9.openstack.org | LeaderElection | barbican-operator-controller-manager-5cd89994b5-974hd_095bce3a-264f-4f9a-a17b-2587d9f6b242 became leader |
| | openstack-operators | multus | openstack-operator-controller-manager-599cfccd85-dgvwj | AddedInterface | Add eth0 [10.128.0.192/23] from ovn-kubernetes |
| | openstack-operators | kubelet | openstack-operator-controller-manager-599cfccd85-dgvwj | Pulled | Container image "quay.io/openstack-k8s-operators/openstack-operator@sha256:a930bf4711e92a6bdc8a5ddb01a63d3a647a7db5f9ddd19bc897cb74292b8365" already present on machine |
| | openstack-operators | heat-operator-controller-manager-7fd96594c7-xnrjg_527248ad-7381-4c0d-a0a2-1002d10c6fc0 | c3c8b535.openstack.org | LeaderElection | heat-operator-controller-manager-7fd96594c7-xnrjg_527248ad-7381-4c0d-a0a2-1002d10c6fc0 became leader |
| | openstack-operators | glance-operator-controller-manager-78cd4f7769-58c6v_92a6c79e-415a-47a9-936c-4bd032367513 | c569355b.openstack.org | LeaderElection | glance-operator-controller-manager-78cd4f7769-58c6v_92a6c79e-415a-47a9-936c-4bd032367513 became leader |
| | openstack-operators | kubelet | heat-operator-controller-manager-7fd96594c7-xnrjg | Started | Started container manager |
| | openstack-operators | test-operator-controller-manager-57dfcdd5b8-twtzz_9c1bfe94-f580-4782-89bb-5faff24ee8ba | 6cce095b.openstack.org | LeaderElection | test-operator-controller-manager-57dfcdd5b8-twtzz_9c1bfe94-f580-4782-89bb-5faff24ee8ba became leader |
| | openstack-operators | kubelet | heat-operator-controller-manager-7fd96594c7-xnrjg | Created | Created container: manager |
| | openstack-operators | horizon-operator-controller-manager-f6cc97788-dtxsw_2c12f3c1-1fa1-42b6-9393-d50fda778354 | 5ad2eba0.openstack.org | LeaderElection | horizon-operator-controller-manager-f6cc97788-dtxsw_2c12f3c1-1fa1-42b6-9393-d50fda778354 became leader |
| | openstack-operators | kubelet | glance-operator-controller-manager-78cd4f7769-58c6v | Pulling | Pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" |
| | openstack-operators | kubelet | manila-operator-controller-manager-56f9fbf74b-hq5jr | Pulling | Pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" |
| | openstack-operators | kubelet | manila-operator-controller-manager-56f9fbf74b-hq5jr | Started | Started container manager |
| | openstack-operators | kubelet | manila-operator-controller-manager-56f9fbf74b-hq5jr | Created | Created container: manager |
| | openstack-operators | kubelet | ovn-operator-controller-manager-647f96877-75x24 | Pulling | Pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" |
| | openstack-operators | kubelet | ovn-operator-controller-manager-647f96877-75x24 | Started | Started container manager |
| | openstack-operators | kubelet | cinder-operator-controller-manager-f8856dd79-mfhwn | Created | Created container: manager |
| | openstack-operators | kubelet | glance-operator-controller-manager-78cd4f7769-58c6v | Started | Started container manager |
| | openstack-operators | kubelet | glance-operator-controller-manager-78cd4f7769-58c6v | Created | Created container: manager |
| | openstack-operators | kubelet | ovn-operator-controller-manager-647f96877-75x24 | Created | Created container: manager |
| | openstack-operators | kubelet | swift-operator-controller-manager-696b999796-p6w8q | Created | Created container: manager |
| | openstack-operators | designate-operator-controller-manager-84bc9f68f5-jjlq7_c03d65a0-a60c-4e3e-aedf-13dd12a451c8 | f9497e05.openstack.org | LeaderElection | designate-operator-controller-manager-84bc9f68f5-jjlq7_c03d65a0-a60c-4e3e-aedf-13dd12a451c8 became leader |
| | openstack-operators | kubelet | cinder-operator-controller-manager-f8856dd79-mfhwn | Started | Started container manager |
| | openstack-operators | kubelet | cinder-operator-controller-manager-f8856dd79-mfhwn | Pulling | Pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" |
| | openstack-operators | kubelet | designate-operator-controller-manager-84bc9f68f5-jjlq7 | Created | Created container: manager |
| | openstack-operators | kubelet | designate-operator-controller-manager-84bc9f68f5-jjlq7 | Started | Started container manager |
| | openstack-operators | kubelet | designate-operator-controller-manager-84bc9f68f5-jjlq7 | Pulling | Pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" |
| | openstack-operators | keystone-operator-controller-manager-58b8dcc5fb-bpmdw_cde702f6-0418-4b9c-a70b-41625037b14a | 6012128b.openstack.org | LeaderElection | keystone-operator-controller-manager-58b8dcc5fb-bpmdw_cde702f6-0418-4b9c-a70b-41625037b14a became leader |
| | openstack-operators | swift-operator-controller-manager-696b999796-p6w8q_aeab7586-8320-4811-9d1f-6f969d912470 | 83821f12.openstack.org | LeaderElection | swift-operator-controller-manager-696b999796-p6w8q_aeab7586-8320-4811-9d1f-6f969d912470 became leader |
| | openstack-operators | kubelet | keystone-operator-controller-manager-58b8dcc5fb-bpmdw | Started | Started container manager |
| | openstack-operators | kubelet | rabbitmq-cluster-operator-manager-78955d896f-94dl9 | Created | Created container: operator |
| | openstack-operators | kubelet | rabbitmq-cluster-operator-manager-78955d896f-94dl9 | Started | Started container operator |
| | openstack-operators | nova-operator-controller-manager-865fc86d5b-fd78q_3f1247a3-06e3-4c36-a9eb-5a91e489e9fe | f33036c1.openstack.org | LeaderElection | nova-operator-controller-manager-865fc86d5b-fd78q_3f1247a3-06e3-4c36-a9eb-5a91e489e9fe became leader |
| | openstack-operators | ironic-operator-controller-manager-7c9bfd6967-782xf_c1ae211b-270e-4698-8c8e-d150da0d1bd1 | f92b5c2d.openstack.org | LeaderElection | ironic-operator-controller-manager-7c9bfd6967-782xf_c1ae211b-270e-4698-8c8e-d150da0d1bd1 became leader |
| | openstack-operators | kubelet | neutron-operator-controller-manager-7cdd6b54fb-4jl24 | Failed | Error: ErrImagePull |
| (x2) | openstack-operators | kubelet | keystone-operator-controller-manager-58b8dcc5fb-bpmdw | Pulling | Pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" |
| | openstack-operators | telemetry-operator-controller-manager-7b5867bfc7-tfj67_037a2b61-ddb4-467e-9bf1-60a9e4ad9c0e | fa1814a2.openstack.org | LeaderElection | telemetry-operator-controller-manager-7b5867bfc7-tfj67_037a2b61-ddb4-467e-9bf1-60a9e4ad9c0e became leader |
| | openstack-operators | kubelet | neutron-operator-controller-manager-7cdd6b54fb-4jl24 | Failed | Failed to pull image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0": pull QPS exceeded |
| | openstack-operators | kubelet | swift-operator-controller-manager-696b999796-p6w8q | Started | Started container manager |
| | openstack-operators | kubelet | neutron-operator-controller-manager-7cdd6b54fb-4jl24 | Started | Started container manager |
| | openstack-operators | kubelet | neutron-operator-controller-manager-7cdd6b54fb-4jl24 | Created | Created container: manager |
| | openstack-operators | kubelet | placement-operator-controller-manager-6b64f6f645-rddjv | Started | Started container manager |
| | openstack-operators | kubelet | swift-operator-controller-manager-696b999796-p6w8q | Pulling | Pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" |
| (x2) | openstack-operators | kubelet | placement-operator-controller-manager-6b64f6f645-rddjv | Failed | Failed to pull image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0": pull QPS exceeded |
| | openstack-operators | kubelet | placement-operator-controller-manager-6b64f6f645-rddjv | Created | Created container: manager |
| | openstack-operators | kubelet | nova-operator-controller-manager-865fc86d5b-fd78q | Created | Created container: manager |
| | openstack-operators | kubelet | nova-operator-controller-manager-865fc86d5b-fd78q | Started | Started container manager |
| | openstack-operators | kubelet | nova-operator-controller-manager-865fc86d5b-fd78q | Pulling | Pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" |
| | openstack-operators | kubelet | openstack-operator-controller-manager-599cfccd85-dgvwj | Started | Started container manager |
| | openstack-operators | kubelet | openstack-operator-controller-manager-599cfccd85-dgvwj | Created | Created container: manager |
| | openstack-operators | kubelet | keystone-operator-controller-manager-58b8dcc5fb-bpmdw | Created | Created container: manager |
| | openstack-operators | manila-operator-controller-manager-56f9fbf74b-hq5jr_7688da24-9584-466d-9f31-ebe7cc8f8860 | 858862a7.openstack.org | LeaderElection | manila-operator-controller-manager-56f9fbf74b-hq5jr_7688da24-9584-466d-9f31-ebe7cc8f8860 became leader |
| | openstack-operators | ovn-operator-controller-manager-647f96877-75x24_7d657c03-cb23-4047-8862-c717f0506ced | 90840a60.openstack.org | LeaderElection | ovn-operator-controller-manager-647f96877-75x24_7d657c03-cb23-4047-8862-c717f0506ced became leader |
| | openstack-operators | kubelet | ironic-operator-controller-manager-7c9bfd6967-782xf | Pulling | Pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" |
| | openstack-operators | kubelet | telemetry-operator-controller-manager-7b5867bfc7-tfj67 | Created | Created container: manager |
| | openstack-operators | kubelet | telemetry-operator-controller-manager-7b5867bfc7-tfj67 | Started | Started container manager |
| | openstack-operators | kubelet | telemetry-operator-controller-manager-7b5867bfc7-tfj67 | Pulling | Pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" |
| (x2) | openstack-operators | kubelet | octavia-operator-controller-manager-845b79dc4f-rs4fz | Failed | Failed to pull image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0": pull QPS exceeded |
| (x2) | openstack-operators |
kubelet |
octavia-operator-controller-manager-845b79dc4f-rs4fz |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" |
| (x2) | openstack-operators |
kubelet |
octavia-operator-controller-manager-845b79dc4f-rs4fz |
Failed |
Error: ErrImagePull |
| (x2) | openstack-operators |
kubelet |
placement-operator-controller-manager-6b64f6f645-rddjv |
Failed |
Error: ErrImagePull |
openstack-operators |
kubelet |
watcher-operator-controller-manager-6b9b669fdb-jsphj |
Started |
Started container manager | |
| (x2) | openstack-operators |
kubelet |
placement-operator-controller-manager-6b64f6f645-rddjv |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" |
openstack-operators |
kubelet |
octavia-operator-controller-manager-845b79dc4f-rs4fz |
Started |
Started container manager | |
| (x2) | openstack-operators |
kubelet |
watcher-operator-controller-manager-6b9b669fdb-jsphj |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" |
openstack-operators |
kubelet |
octavia-operator-controller-manager-845b79dc4f-rs4fz |
Created |
Created container: manager | |
openstack-operators |
kubelet |
watcher-operator-controller-manager-6b9b669fdb-jsphj |
Created |
Created container: manager | |
openstack-operators |
octavia-operator-controller-manager-845b79dc4f-rs4fz_77395f52-9434-42f2-b43b-5c52d344a906 |
98809e87.openstack.org |
LeaderElection |
octavia-operator-controller-manager-845b79dc4f-rs4fz_77395f52-9434-42f2-b43b-5c52d344a906 became leader | |
| (x4) | openstack-operators |
kubelet |
octavia-operator-controller-manager-845b79dc4f-rs4fz |
Failed |
Error: ImagePullBackOff |
| (x4) | openstack-operators |
kubelet |
octavia-operator-controller-manager-845b79dc4f-rs4fz |
BackOff |
Back-off pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" |
| (x4) | openstack-operators |
kubelet |
placement-operator-controller-manager-6b64f6f645-rddjv |
BackOff |
Back-off pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" |
| (x4) | openstack-operators |
kubelet |
placement-operator-controller-manager-6b64f6f645-rddjv |
Failed |
Error: ImagePullBackOff |
openstack-operators |
placement-operator-controller-manager-6b64f6f645-rddjv_cd33cdd4-ed6c-4503-beb1-7dde537a2dbe |
73d6b7ce.openstack.org |
LeaderElection |
placement-operator-controller-manager-6b64f6f645-rddjv_cd33cdd4-ed6c-4503-beb1-7dde537a2dbe became leader | |
openstack-operators |
rabbitmq-cluster-operator-manager-78955d896f-94dl9_e3210dd3-bb8b-4739-a79b-24875f513365 |
rabbitmq-cluster-operator-leader-election |
LeaderElection |
rabbitmq-cluster-operator-manager-78955d896f-94dl9_e3210dd3-bb8b-4739-a79b-24875f513365 became leader | |
openstack-operators |
neutron-operator-controller-manager-7cdd6b54fb-4jl24_c62f0e3d-c765-40db-9bb0-913b065d2402 |
972c7522.openstack.org |
LeaderElection |
neutron-operator-controller-manager-7cdd6b54fb-4jl24_c62f0e3d-c765-40db-9bb0-913b065d2402 became leader | |
openstack-operators |
mariadb-operator-controller-manager-647d75769b-8dqxm_924b6383-8485-4614-b5f3-1d1392a9594f |
7c2a6c6b.openstack.org |
LeaderElection |
mariadb-operator-controller-manager-647d75769b-8dqxm_924b6383-8485-4614-b5f3-1d1392a9594f became leader | |
openstack-operators |
watcher-operator-controller-manager-6b9b669fdb-jsphj_2da50233-2a14-41e0-b7d1-8791fd77025b |
5049980f.openstack.org |
LeaderElection |
watcher-operator-controller-manager-6b9b669fdb-jsphj_2da50233-2a14-41e0-b7d1-8791fd77025b became leader | |
openstack-operators |
openstack-operator-controller-manager-599cfccd85-dgvwj_c22f9522-4fb7-4a44-95de-368638d70537 |
40ba705e.openstack.org |
LeaderElection |
openstack-operator-controller-manager-599cfccd85-dgvwj_c22f9522-4fb7-4a44-95de-368638d70537 became leader | |
| (x3) | openstack-operators |
kubelet |
neutron-operator-controller-manager-7cdd6b54fb-4jl24 |
Failed |
Error: ImagePullBackOff |
| (x3) | openstack-operators |
kubelet |
neutron-operator-controller-manager-7cdd6b54fb-4jl24 |
BackOff |
Back-off pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" |
| (x2) | openstack-operators |
kubelet |
neutron-operator-controller-manager-7cdd6b54fb-4jl24 |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" |
openstack-operators |
kubelet |
openstack-baremetal-operator-controller-manager-6f998f5746f9gjr |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:14cfad6ea2e7f7ecc4cb2aafceb9c61514b3d04b66668832d1e4ac3b19f1ab81" in 22.673s (22.673s including waiting). Image size: 190602344 bytes. | |
metallb-system |
kubelet |
metallb-operator-controller-manager-6f8cddc44c-f979v |
Unhealthy |
Readiness probe failed: Get "http://10.128.0.151:8080/readyz": dial tcp 10.128.0.151:8080: connect: connection refused | |
openstack-operators |
kubelet |
test-operator-controller-manager-57dfcdd5b8-twtzz |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" in 38.363s (38.363s including waiting). Image size: 68421467 bytes. | |
openstack-operators |
kubelet |
telemetry-operator-controller-manager-7b5867bfc7-tfj67 |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" in 38.484s (38.484s including waiting). Image size: 68421467 bytes. | |
openstack-operators |
kubelet |
ironic-operator-controller-manager-7c9bfd6967-782xf |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" in 38.48s (38.48s including waiting). Image size: 68421467 bytes. | |
openstack-operators |
kubelet |
keystone-operator-controller-manager-58b8dcc5fb-bpmdw |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" in 38.386s (38.387s including waiting). Image size: 68421467 bytes. | |
openstack-operators |
kubelet |
neutron-operator-controller-manager-7cdd6b54fb-4jl24 |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" in 18.069s (18.069s including waiting). Image size: 68421467 bytes. | |
openstack-operators |
kubelet |
heat-operator-controller-manager-7fd96594c7-xnrjg |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" in 38.93s (38.93s including waiting). Image size: 68421467 bytes. | |
openstack-operators |
kubelet |
ovn-operator-controller-manager-647f96877-75x24 |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" in 38.895s (38.895s including waiting). Image size: 68421467 bytes. | |
openstack-operators |
kubelet |
infra-operator-controller-manager-7d9c9d7fd8-f228s |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/infra-operator@sha256:09a6d0613ee2d3c1c809fc36c22678458ac271e0da87c970aec0a5339f5423f7" in 38.861s (38.861s including waiting). Image size: 179448753 bytes. | |
openstack-operators |
kubelet |
mariadb-operator-controller-manager-647d75769b-8dqxm |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" in 39.331s (39.331s including waiting). Image size: 68421467 bytes. | |
openstack-operators |
kubelet |
heat-operator-controller-manager-7fd96594c7-xnrjg |
Created |
Created container: kube-rbac-proxy | |
openstack-operators |
kubelet |
openstack-baremetal-operator-controller-manager-6f998f5746f9gjr |
Started |
Started container manager | |
openstack-operators |
kubelet |
infra-operator-controller-manager-7d9c9d7fd8-f228s |
Created |
Created container: manager | |
openstack-operators |
kubelet |
keystone-operator-controller-manager-58b8dcc5fb-bpmdw |
Created |
Created container: kube-rbac-proxy | |
openstack-operators |
kubelet |
infra-operator-controller-manager-7d9c9d7fd8-f228s |
Started |
Started container manager | |
openstack-operators |
kubelet |
test-operator-controller-manager-57dfcdd5b8-twtzz |
Created |
Created container: kube-rbac-proxy | |
openstack-operators |
kubelet |
ovn-operator-controller-manager-647f96877-75x24 |
Started |
Started container kube-rbac-proxy | |
openstack-operators |
kubelet |
infra-operator-controller-manager-7d9c9d7fd8-f228s |
Pulled |
Container image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" already present on machine | |
openstack-operators |
kubelet |
openstack-baremetal-operator-controller-manager-6f998f5746f9gjr |
Created |
Created container: manager | |
openstack-operators |
kubelet |
heat-operator-controller-manager-7fd96594c7-xnrjg |
Started |
Started container kube-rbac-proxy | |
openstack-operators |
kubelet |
openstack-baremetal-operator-controller-manager-6f998f5746f9gjr |
Pulled |
Container image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" already present on machine | |
openstack-operators |
infra-operator-controller-manager-7d9c9d7fd8-f228s_30b84052-9d29-4632-8571-c2c8cc21287f |
c8c223a1.openstack.org |
LeaderElection |
infra-operator-controller-manager-7d9c9d7fd8-f228s_30b84052-9d29-4632-8571-c2c8cc21287f became leader | |
openstack-operators |
kubelet |
cinder-operator-controller-manager-f8856dd79-mfhwn |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" in 39.851s (39.851s including waiting). Image size: 68421467 bytes. | |
openstack-operators |
openstack-baremetal-operator-controller-manager-6f998f5746f9gjr_077be49d-20b7-403b-8fb4-a552f0d5020d |
dedc2245.openstack.org |
LeaderElection |
openstack-baremetal-operator-controller-manager-6f998f5746f9gjr_077be49d-20b7-403b-8fb4-a552f0d5020d became leader | |
openstack-operators |
kubelet |
ovn-operator-controller-manager-647f96877-75x24 |
Created |
Created container: kube-rbac-proxy | |
openstack-operators |
kubelet |
swift-operator-controller-manager-696b999796-p6w8q |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" in 39.314s (39.314s including waiting). Image size: 68421467 bytes. | |
openstack-operators |
kubelet |
keystone-operator-controller-manager-58b8dcc5fb-bpmdw |
Started |
Started container kube-rbac-proxy | |
openstack-operators |
kubelet |
test-operator-controller-manager-57dfcdd5b8-twtzz |
Started |
Started container kube-rbac-proxy | |
openstack-operators |
kubelet |
telemetry-operator-controller-manager-7b5867bfc7-tfj67 |
Started |
Started container kube-rbac-proxy | |
openstack-operators |
kubelet |
ironic-operator-controller-manager-7c9bfd6967-782xf |
Created |
Created container: kube-rbac-proxy | |
openstack-operators |
kubelet |
neutron-operator-controller-manager-7cdd6b54fb-4jl24 |
Created |
Created container: kube-rbac-proxy | |
openstack-operators |
kubelet |
designate-operator-controller-manager-84bc9f68f5-jjlq7 |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" in 40.857s (40.857s including waiting). Image size: 68421467 bytes. | |
openstack-operators |
kubelet |
nova-operator-controller-manager-865fc86d5b-fd78q |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" in 40.166s (40.167s including waiting). Image size: 68421467 bytes. | |
openstack-operators |
kubelet |
cinder-operator-controller-manager-f8856dd79-mfhwn |
Started |
Started container kube-rbac-proxy | |
openstack-operators |
kubelet |
cinder-operator-controller-manager-f8856dd79-mfhwn |
Created |
Created container: kube-rbac-proxy | |
openstack-operators |
kubelet |
openstack-baremetal-operator-controller-manager-6f998f5746f9gjr |
Started |
Started container kube-rbac-proxy | |
openstack-operators |
kubelet |
manila-operator-controller-manager-56f9fbf74b-hq5jr |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" in 41.224s (41.224s including waiting). Image size: 68421467 bytes. | |
openstack-operators |
kubelet |
openstack-baremetal-operator-controller-manager-6f998f5746f9gjr |
Created |
Created container: kube-rbac-proxy | |
openstack-operators |
kubelet |
barbican-operator-controller-manager-5cd89994b5-974hd |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" in 41.431s (41.431s including waiting). Image size: 68421467 bytes. | |
openstack-operators |
kubelet |
ironic-operator-controller-manager-7c9bfd6967-782xf |
Pulled |
Container image "quay.io/openstack-k8s-operators/ironic-operator@sha256:0f523b7e2fa9e86fef986acf07d0c42d5658c475d565f11eaea926ebffcb6530" already present on machine | |
openstack-operators |
kubelet |
ironic-operator-controller-manager-7c9bfd6967-782xf |
Started |
Started container kube-rbac-proxy | |
openstack-operators |
kubelet |
watcher-operator-controller-manager-6b9b669fdb-jsphj |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" in 39.436s (39.436s including waiting). Image size: 68421467 bytes. | |
openstack-operators |
kubelet |
infra-operator-controller-manager-7d9c9d7fd8-f228s |
Started |
Started container kube-rbac-proxy | |
openstack-operators |
kubelet |
glance-operator-controller-manager-78cd4f7769-58c6v |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" in 41.345s (41.345s including waiting). Image size: 68421467 bytes. | |
openstack-operators |
kubelet |
infra-operator-controller-manager-7d9c9d7fd8-f228s |
Created |
Created container: kube-rbac-proxy | |
openstack-operators |
kubelet |
telemetry-operator-controller-manager-7b5867bfc7-tfj67 |
Created |
Created container: kube-rbac-proxy | |
openstack-operators |
kubelet |
neutron-operator-controller-manager-7cdd6b54fb-4jl24 |
Started |
Started container kube-rbac-proxy | |
openstack-operators |
kubelet |
horizon-operator-controller-manager-f6cc97788-dtxsw |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" in 40.92s (40.92s including waiting). Image size: 68421467 bytes. | |
openstack-operators |
kubelet |
horizon-operator-controller-manager-f6cc97788-dtxsw |
Created |
Created container: kube-rbac-proxy | |
openstack-operators |
kubelet |
test-operator-controller-manager-57dfcdd5b8-twtzz |
Pulled |
Container image "quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94" already present on machine | |
openstack-operators |
kubelet |
horizon-operator-controller-manager-f6cc97788-dtxsw |
Started |
Started container kube-rbac-proxy | |
openstack-operators |
kubelet |
mariadb-operator-controller-manager-647d75769b-8dqxm |
Started |
Started container kube-rbac-proxy | |
openstack-operators |
kubelet |
mariadb-operator-controller-manager-647d75769b-8dqxm |
Created |
Created container: kube-rbac-proxy | |
openstack-operators |
kubelet |
watcher-operator-controller-manager-6b9b669fdb-jsphj |
Created |
Created container: kube-rbac-proxy | |
openstack-operators |
kubelet |
swift-operator-controller-manager-696b999796-p6w8q |
Started |
Started container kube-rbac-proxy | |
openstack-operators |
kubelet |
watcher-operator-controller-manager-6b9b669fdb-jsphj |
Started |
Started container kube-rbac-proxy | |
openstack-operators |
kubelet |
swift-operator-controller-manager-696b999796-p6w8q |
Created |
Created container: kube-rbac-proxy | |
| (x2) | openstack-operators |
kubelet |
ironic-operator-controller-manager-7c9bfd6967-782xf |
Started |
Started container manager |
openstack-operators |
kubelet |
designate-operator-controller-manager-84bc9f68f5-jjlq7 |
Started |
Started container kube-rbac-proxy | |
openstack-operators |
kubelet |
manila-operator-controller-manager-56f9fbf74b-hq5jr |
Created |
Created container: kube-rbac-proxy | |
openstack-operators |
kubelet |
barbican-operator-controller-manager-5cd89994b5-974hd |
Created |
Created container: kube-rbac-proxy | |
openstack-operators |
kubelet |
barbican-operator-controller-manager-5cd89994b5-974hd |
Started |
Started container kube-rbac-proxy | |
| (x2) | openstack-operators |
kubelet |
ironic-operator-controller-manager-7c9bfd6967-782xf |
Created |
Created container: manager |
openstack-operators |
kubelet |
designate-operator-controller-manager-84bc9f68f5-jjlq7 |
Created |
Created container: kube-rbac-proxy | |
| (x2) | openstack-operators |
kubelet |
test-operator-controller-manager-57dfcdd5b8-twtzz |
Started |
Started container manager |
| (x2) | openstack-operators |
kubelet |
test-operator-controller-manager-57dfcdd5b8-twtzz |
Created |
Created container: manager |
openstack-operators |
kubelet |
manila-operator-controller-manager-56f9fbf74b-hq5jr |
Started |
Started container kube-rbac-proxy | |
openstack-operators |
kubelet |
nova-operator-controller-manager-865fc86d5b-fd78q |
Started |
Started container kube-rbac-proxy | |
openstack-operators |
kubelet |
glance-operator-controller-manager-78cd4f7769-58c6v |
Created |
Created container: kube-rbac-proxy | |
openstack-operators |
kubelet |
nova-operator-controller-manager-865fc86d5b-fd78q |
Created |
Created container: kube-rbac-proxy | |
openstack-operators |
kubelet |
glance-operator-controller-manager-78cd4f7769-58c6v |
Started |
Started container kube-rbac-proxy | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ConfigMapUpdated |
Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml | |
metallb-system |
metallb-operator-controller-manager-6f8cddc44c-f979v_a7077371-2adc-4047-a16a-c41fb219f5e9 |
metallb.io.metallboperator |
LeaderElection |
metallb-operator-controller-manager-6f8cddc44c-f979v_a7077371-2adc-4047-a16a-c41fb219f5e9 became leader | |
openstack-operators |
ironic-operator-controller-manager-7c9bfd6967-782xf_4a2459f5-d4d2-44dd-9e28-da405cd9e4d6 |
f92b5c2d.openstack.org |
LeaderElection |
ironic-operator-controller-manager-7c9bfd6967-782xf_4a2459f5-d4d2-44dd-9e28-da405cd9e4d6 became leader | |
openstack-operators |
test-operator-controller-manager-57dfcdd5b8-twtzz_aa5eb1b1-f9c5-4e5c-88fd-08a82e31d867 |
6cce095b.openstack.org |
LeaderElection |
test-operator-controller-manager-57dfcdd5b8-twtzz_aa5eb1b1-f9c5-4e5c-88fd-08a82e31d867 became leader | |
openstack-operators |
octavia-operator-controller-manager-845b79dc4f-rs4fz_33a6b0ba-00dc-4b6c-8896-c1a187a345e8 |
98809e87.openstack.org |
LeaderElection |
octavia-operator-controller-manager-845b79dc4f-rs4fz_33a6b0ba-00dc-4b6c-8896-c1a187a345e8 became leader | |
openshift-marketplace |
multus |
certified-operators-knvdv |
AddedInterface |
Add eth0 [10.128.0.194/23] from ovn-kubernetes | |
openshift-marketplace |
kubelet |
certified-operators-knvdv |
Pulling |
Pulling image "registry.redhat.io/redhat/certified-operator-index:v4.18" | |
openshift-marketplace |
kubelet |
certified-operators-knvdv |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1ca78c423f43f89a0411e40393642f64e4f8df9e5f61c25e31047c4cce170f9" already present on machine | |
openshift-marketplace |
kubelet |
certified-operators-knvdv |
Started |
Started container extract-utilities | |
openshift-marketplace |
kubelet |
certified-operators-knvdv |
Created |
Created container: extract-utilities | |
openshift-marketplace |
kubelet |
certified-operators-knvdv |
Started |
Started container extract-content | |
openshift-marketplace |
kubelet |
certified-operators-knvdv |
Created |
Created container: extract-content | |
openshift-marketplace |
kubelet |
certified-operators-knvdv |
Pulled |
Successfully pulled image "registry.redhat.io/redhat/certified-operator-index:v4.18" in 1.315s (1.315s including waiting). Image size: 1205106509 bytes. | |
openshift-marketplace |
kubelet |
certified-operators-knvdv |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" | |
openshift-marketplace |
kubelet |
certified-operators-knvdv |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" in 384ms (384ms including waiting). Image size: 912722556 bytes. | |
openshift-marketplace |
kubelet |
certified-operators-knvdv |
Started |
Started container registry-server | |
openshift-marketplace |
kubelet |
certified-operators-knvdv |
Created |
Created container: registry-server | |
openshift-marketplace |
multus |
redhat-operators-826mk |
AddedInterface |
Add eth0 [10.128.0.198/23] from ovn-kubernetes | |
openshift-marketplace |
kubelet |
redhat-operators-826mk |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1ca78c423f43f89a0411e40393642f64e4f8df9e5f61c25e31047c4cce170f9" already present on machine | |
openshift-marketplace |
kubelet |
redhat-operators-826mk |
Created |
Created container: extract-utilities | |
openshift-marketplace |
kubelet |
redhat-operators-826mk |
Started |
Started container extract-utilities | |
openshift-marketplace |
kubelet |
certified-operators-knvdv |
Killing |
Stopping container registry-server | |
openshift-marketplace |
kubelet |
redhat-operators-826mk |
Pulling |
Pulling image "registry.redhat.io/redhat/redhat-operator-index:v4.18" | |
openshift-marketplace |
kubelet |
redhat-operators-826mk |
Pulled |
Successfully pulled image "registry.redhat.io/redhat/redhat-operator-index:v4.18" in 24.624s (24.624s including waiting). Image size: 1610175307 bytes. | |
openshift-marketplace |
multus |
community-operators-4gtrh |
AddedInterface |
Add eth0 [10.128.0.206/23] from ovn-kubernetes | |
openshift-marketplace |
kubelet |
community-operators-4gtrh |
Created |
Created container: extract-utilities | |
openshift-marketplace |
kubelet |
redhat-operators-826mk |
Started |
Started container extract-content | |
openshift-marketplace |
kubelet |
redhat-operators-826mk |
Created |
Created container: extract-content | |
openshift-marketplace |
kubelet |
community-operators-4gtrh |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1ca78c423f43f89a0411e40393642f64e4f8df9e5f61c25e31047c4cce170f9" already present on machine | |
openshift-marketplace |
kubelet |
community-operators-4gtrh |
Started |
Started container extract-utilities | |
openshift-marketplace |
kubelet |
redhat-operators-826mk |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" | |
openshift-marketplace |
kubelet |
community-operators-4gtrh |
Pulling |
Pulling image "registry.redhat.io/redhat/community-operator-index:v4.18" | |
openshift-marketplace |
kubelet |
redhat-operators-826mk |
Started |
Started container registry-server | |
openshift-marketplace |
kubelet |
redhat-operators-826mk |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" in 6.756s (6.756s including waiting). Image size: 912722556 bytes. | |
openshift-marketplace |
kubelet |
community-operators-4gtrh |
Started |
Started container extract-content | |
openshift-marketplace |
kubelet |
redhat-operators-826mk |
Created |
Created container: registry-server | |
openshift-marketplace |
kubelet |
community-operators-4gtrh |
Created |
Created container: extract-content | |
openshift-marketplace |
kubelet |
community-operators-4gtrh |
Pulled |
Successfully pulled image "registry.redhat.io/redhat/community-operator-index:v4.18" in 6.841s (6.841s including waiting). Image size: 1201434959 bytes. | |
openshift-marketplace |
kubelet |
community-operators-4gtrh |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" | |
openshift-marketplace |
kubelet |
community-operators-4gtrh |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" in 448ms (448ms including waiting). Image size: 912722556 bytes. | |
openshift-marketplace |
kubelet |
community-operators-4gtrh |
Created |
Created container: registry-server | |
openshift-marketplace |
kubelet |
community-operators-4gtrh |
Started |
Started container registry-server | |
| (x2) | metallb-system |
kubelet |
metallb-operator-controller-manager-6f8cddc44c-f979v |
BackOff |
Back-off restarting failed container manager in pod metallb-operator-controller-manager-6f8cddc44c-f979v_metallb-system(70729025-94a7-4a9b-98b8-e68b73d59a3e) |
openshift-marketplace |
multus |
redhat-marketplace-9mxhg |
AddedInterface |
Add eth0 [10.128.0.210/23] from ovn-kubernetes | |
openshift-marketplace |
kubelet |
redhat-marketplace-9mxhg |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1ca78c423f43f89a0411e40393642f64e4f8df9e5f61c25e31047c4cce170f9" already present on machine | |
openshift-marketplace |
kubelet |
redhat-marketplace-9mxhg |
Created |
Created container: extract-utilities | |
openshift-marketplace |
kubelet |
redhat-marketplace-9mxhg |
Started |
Started container extract-utilities | |
openshift-marketplace |
kubelet |
community-operators-4gtrh |
Killing |
Stopping container registry-server | |
openshift-marketplace |
kubelet |
redhat-marketplace-9mxhg |
Pulling |
Pulling image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" | |
openshift-marketplace |
kubelet |
redhat-operators-826mk |
Killing |
Stopping container registry-server | |
| (x2) | metallb-system |
operator-lifecycle-manager |
metallb-operator.v4.18.0-202511181540 |
ComponentUnhealthy |
installing: waiting for deployment metallb-operator-controller-manager to become ready: deployment "metallb-operator-controller-manager" not available: Deployment does not have minimum availability. |
openshift-marketplace |
kubelet |
redhat-marketplace-9mxhg |
Pulled |
Successfully pulled image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" in 840ms (840ms including waiting). Image size: 1129027903 bytes. | |
openshift-marketplace |
kubelet |
redhat-marketplace-9mxhg |
Created |
Created container: extract-content | |
| (x2) | metallb-system |
kubelet |
metallb-operator-controller-manager-6f8cddc44c-f979v |
Pulled |
Container image "registry.redhat.io/openshift4/metallb-rhel9-operator@sha256:113daf5589fc8d963b942a3ab0fc20408aa6ed44e34019539e0e3252bb11297a" already present on machine |
openshift-marketplace |
kubelet |
redhat-marketplace-9mxhg |
Started |
Started container extract-content | |
| (x3) | metallb-system |
kubelet |
metallb-operator-controller-manager-6f8cddc44c-f979v |
Created |
Created container: manager |
openshift-marketplace |
kubelet |
redhat-marketplace-9mxhg |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" | |
| (x3) | metallb-system |
kubelet |
metallb-operator-controller-manager-6f8cddc44c-f979v |
Started |
Started container manager |
openshift-marketplace |
kubelet |
redhat-marketplace-9mxhg |
Started |
Started container registry-server | |
openshift-marketplace |
kubelet |
redhat-marketplace-9mxhg |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" in 729ms (729ms including waiting). Image size: 912722556 bytes. | |
openshift-marketplace |
kubelet |
redhat-marketplace-9mxhg |
Created |
Created container: registry-server | |
| (x2) | metallb-system |
operator-lifecycle-manager |
metallb-operator.v4.18.0-202511181540 |
NeedsReinstall |
installing: waiting for deployment metallb-operator-controller-manager to become ready: deployment "metallb-operator-controller-manager" not available: Deployment does not have minimum availability. |
| (x3) | metallb-system |
operator-lifecycle-manager |
metallb-operator.v4.18.0-202511181540 |
AllRequirementsMet |
all requirements found, attempting install |
| (x3) | metallb-system |
operator-lifecycle-manager |
metallb-operator.v4.18.0-202511181540 |
InstallSucceeded |
waiting for install components to report healthy |
| (x3) | metallb-system | operator-lifecycle-manager | metallb-operator.v4.18.0-202511181540 | InstallWaiting | installing: waiting for deployment metallb-operator-controller-manager to become ready: deployment "metallb-operator-controller-manager" not available: Deployment does not have minimum availability. |
| | openshift-marketplace | kubelet | redhat-marketplace-9mxhg | Killing | Stopping container registry-server |
| | metallb-system | metallb-operator-controller-manager-6f8cddc44c-f979v_f9016896-279f-4e7a-92b6-97672b5196df | metallb.io.metallboperator | LeaderElection | metallb-operator-controller-manager-6f8cddc44c-f979v_f9016896-279f-4e7a-92b6-97672b5196df became leader |
| (x3) | metallb-system | operator-lifecycle-manager | metallb-operator.v4.18.0-202511181540 | InstallSucceeded | install strategy completed with no errors |
| | default | endpoint-controller | keystone-internal | FailedToCreateEndpoint | Failed to create endpoint for service openstack/keystone-internal: endpoints "keystone-internal" already exists |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-marketplace | multus | certified-operators-dwgbt | AddedInterface | Add eth0 [10.128.1.51/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | certified-operators-dwgbt | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | certified-operators-dwgbt | Created | Created container: extract-utilities |
| | openshift-marketplace | kubelet | certified-operators-dwgbt | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1ca78c423f43f89a0411e40393642f64e4f8df9e5f61c25e31047c4cce170f9" already present on machine |
| | openshift-marketplace | kubelet | certified-operators-dwgbt | Pulling | Pulling image "registry.redhat.io/redhat/certified-operator-index:v4.18" |
| | openshift-marketplace | kubelet | certified-operators-dwgbt | Pulled | Successfully pulled image "registry.redhat.io/redhat/certified-operator-index:v4.18" in 606ms (607ms including waiting). Image size: 1205106509 bytes. |
| | openshift-marketplace | kubelet | certified-operators-dwgbt | Created | Created container: extract-content |
| | openshift-marketplace | kubelet | certified-operators-dwgbt | Started | Started container extract-content |
| | openshift-marketplace | kubelet | certified-operators-dwgbt | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" in 405ms (405ms including waiting). Image size: 912722556 bytes. |
| | openshift-marketplace | kubelet | certified-operators-dwgbt | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" |
| | openshift-marketplace | kubelet | certified-operators-dwgbt | Started | Started container registry-server |
| | openshift-marketplace | kubelet | certified-operators-dwgbt | Created | Created container: registry-server |
| | openshift-marketplace | kubelet | certified-operators-dwgbt | Killing | Stopping container registry-server |
| | openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SuccessfulCreate | Created job collect-profiles-29414190 |
| | openshift-operator-lifecycle-manager | job-controller | collect-profiles-29414190 | SuccessfulCreate | Created pod: collect-profiles-29414190-mgzzv |
| | openshift-operator-lifecycle-manager | kubelet | collect-profiles-29414190-mgzzv | Started | Started container collect-profiles |
| | openshift-operator-lifecycle-manager | kubelet | collect-profiles-29414190-mgzzv | Created | Created container: collect-profiles |
| | openshift-operator-lifecycle-manager | kubelet | collect-profiles-29414190-mgzzv | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1ca78c423f43f89a0411e40393642f64e4f8df9e5f61c25e31047c4cce170f9" already present on machine |
| | openshift-operator-lifecycle-manager | multus | collect-profiles-29414190-mgzzv | AddedInterface | Add eth0 [10.128.1.52/23] from ovn-kubernetes |
| | openshift-operator-lifecycle-manager | job-controller | collect-profiles-29414190 | Completed | Job completed |
| | openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SawCompletedJob | Saw completed job: collect-profiles-29414190, condition: Complete |
| | openshift-marketplace | kubelet | community-operators-722qv | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | community-operators-722qv | Created | Created container: extract-utilities |
| | openshift-marketplace | kubelet | community-operators-722qv | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1ca78c423f43f89a0411e40393642f64e4f8df9e5f61c25e31047c4cce170f9" already present on machine |
| | openshift-marketplace | multus | community-operators-722qv | AddedInterface | Add eth0 [10.128.1.53/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | community-operators-722qv | Pulling | Pulling image "registry.redhat.io/redhat/community-operator-index:v4.18" |
| | openshift-marketplace | kubelet | community-operators-722qv | Created | Created container: extract-content |
| | openshift-marketplace | kubelet | community-operators-722qv | Started | Started container extract-content |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-marketplace | kubelet | community-operators-722qv | Pulled | Successfully pulled image "registry.redhat.io/redhat/community-operator-index:v4.18" in 3.734s (3.734s including waiting). Image size: 1201438029 bytes. |
| | openshift-marketplace | kubelet | community-operators-722qv | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" in 527ms (527ms including waiting). Image size: 912722556 bytes. |
| | openshift-marketplace | kubelet | community-operators-722qv | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" |
| | openshift-marketplace | kubelet | community-operators-722qv | Created | Created container: registry-server |
| | openshift-marketplace | kubelet | community-operators-722qv | Started | Started container registry-server |
| | openshift-marketplace | multus | redhat-marketplace-7bh6j | AddedInterface | Add eth0 [10.128.1.54/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | redhat-marketplace-7bh6j | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1ca78c423f43f89a0411e40393642f64e4f8df9e5f61c25e31047c4cce170f9" already present on machine |
| | openshift-marketplace | kubelet | redhat-marketplace-7bh6j | Pulling | Pulling image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" |
| | openshift-marketplace | kubelet | redhat-marketplace-7bh6j | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | redhat-marketplace-7bh6j | Created | Created container: extract-utilities |
| | openshift-marketplace | kubelet | community-operators-rxhpq | Killing | Stopping container registry-server |
| | openshift-marketplace | kubelet | redhat-marketplace-7bh6j | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" in 984ms (984ms including waiting). Image size: 1129027903 bytes. |
| | openshift-marketplace | kubelet | redhat-marketplace-7bh6j | Created | Created container: extract-content |
| | openshift-marketplace | kubelet | redhat-marketplace-7bh6j | Started | Started container extract-content |
| | openshift-marketplace | kubelet | redhat-marketplace-7bh6j | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" |
| | openshift-marketplace | kubelet | redhat-marketplace-7bh6j | Created | Created container: registry-server |
| | openshift-marketplace | kubelet | redhat-marketplace-7bh6j | Started | Started container registry-server |
| | openshift-marketplace | kubelet | redhat-marketplace-7bh6j | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" in 408ms (408ms including waiting). Image size: 912722556 bytes. |
| | openshift-marketplace | kubelet | redhat-marketplace-7bh6j | Killing | Stopping container registry-server |
| | openshift-marketplace | kubelet | redhat-operators-nlk69 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1ca78c423f43f89a0411e40393642f64e4f8df9e5f61c25e31047c4cce170f9" already present on machine |
| | openshift-marketplace | multus | redhat-operators-nlk69 | AddedInterface | Add eth0 [10.128.1.55/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | redhat-operators-nlk69 | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | redhat-operators-nlk69 | Created | Created container: extract-utilities |
| | openshift-marketplace | kubelet | redhat-operators-nlk69 | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-operator-index:v4.18" in 800ms (800ms including waiting). Image size: 1610175307 bytes. |
| | openshift-marketplace | kubelet | redhat-operators-nlk69 | Pulling | Pulling image "registry.redhat.io/redhat/redhat-operator-index:v4.18" |
| | openshift-marketplace | kubelet | redhat-operators-nlk69 | Created | Created container: extract-content |
| | openshift-marketplace | kubelet | redhat-operators-nlk69 | Started | Started container extract-content |
| | openshift-marketplace | kubelet | redhat-operators-nlk69 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" in 512ms (512ms including waiting). Image size: 912722556 bytes. |
| | openshift-marketplace | kubelet | redhat-operators-nlk69 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" |
| | openshift-marketplace | kubelet | redhat-operators-nlk69 | Started | Started container registry-server |
| | openshift-marketplace | kubelet | redhat-operators-nlk69 | Created | Created container: registry-server |
| | openshift-marketplace | kubelet | redhat-operators-nlk69 | Killing | Stopping container registry-server |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-marketplace | multus | certified-operators-kgvrg | AddedInterface | Add eth0 [10.128.1.56/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | certified-operators-kgvrg | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | certified-operators-kgvrg | Created | Created container: extract-utilities |
| | openshift-marketplace | kubelet | certified-operators-kgvrg | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1ca78c423f43f89a0411e40393642f64e4f8df9e5f61c25e31047c4cce170f9" already present on machine |
| | openshift-marketplace | kubelet | certified-operators-kgvrg | Pulling | Pulling image "registry.redhat.io/redhat/certified-operator-index:v4.18" |
| | openshift-marketplace | kubelet | certified-operators-kgvrg | Pulled | Successfully pulled image "registry.redhat.io/redhat/certified-operator-index:v4.18" in 799ms (799ms including waiting). Image size: 1205106509 bytes. |
| | openshift-marketplace | kubelet | certified-operators-kgvrg | Created | Created container: extract-content |
| | openshift-marketplace | kubelet | certified-operators-kgvrg | Started | Started container extract-content |
| | openshift-marketplace | kubelet | certified-operators-kgvrg | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" |
| | openshift-marketplace | kubelet | certified-operators-kgvrg | Created | Created container: registry-server |
| | openshift-marketplace | kubelet | certified-operators-kgvrg | Started | Started container registry-server |
| | openshift-marketplace | kubelet | certified-operators-kgvrg | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" in 443ms (443ms including waiting). Image size: 912722556 bytes. |
| | openshift-marketplace | kubelet | certified-operators-kgvrg | Killing | Stopping container registry-server |
| | openshift-marketplace | kubelet | community-operators-tzlrc | Created | Created container: extract-utilities |
| | openshift-marketplace | multus | community-operators-tzlrc | AddedInterface | Add eth0 [10.128.1.57/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | community-operators-tzlrc | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1ca78c423f43f89a0411e40393642f64e4f8df9e5f61c25e31047c4cce170f9" already present on machine |
| | openshift-marketplace | kubelet | community-operators-tzlrc | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | community-operators-tzlrc | Pulling | Pulling image "registry.redhat.io/redhat/community-operator-index:v4.18" |
| | openshift-marketplace | kubelet | community-operators-tzlrc | Pulled | Successfully pulled image "registry.redhat.io/redhat/community-operator-index:v4.18" in 901ms (901ms including waiting). Image size: 1201438029 bytes. |
| | openshift-marketplace | kubelet | community-operators-tzlrc | Started | Started container extract-content |
| | openshift-marketplace | kubelet | community-operators-tzlrc | Created | Created container: extract-content |
| | openshift-marketplace | kubelet | community-operators-tzlrc | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" |
| | openshift-marketplace | kubelet | community-operators-tzlrc | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" in 423ms (423ms including waiting). Image size: 912722556 bytes. |
| | openshift-marketplace | kubelet | community-operators-tzlrc | Created | Created container: registry-server |
| | openshift-marketplace | kubelet | community-operators-tzlrc | Started | Started container registry-server |
| | openshift-marketplace | kubelet | community-operators-tzlrc | Killing | Stopping container registry-server |
| | openshift-marketplace | multus | redhat-marketplace-6vhlg | AddedInterface | Add eth0 [10.128.1.58/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | redhat-marketplace-6vhlg | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1ca78c423f43f89a0411e40393642f64e4f8df9e5f61c25e31047c4cce170f9" already present on machine |
| | openshift-marketplace | kubelet | redhat-marketplace-6vhlg | Created | Created container: extract-utilities |
| | openshift-marketplace | kubelet | redhat-marketplace-6vhlg | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | redhat-marketplace-6vhlg | Pulling | Pulling image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" |
| | openshift-marketplace | kubelet | redhat-marketplace-6vhlg | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" in 605ms (605ms including waiting). Image size: 1129027903 bytes. |
| | openshift-marketplace | kubelet | redhat-marketplace-6vhlg | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" in 404ms (404ms including waiting). Image size: 912722556 bytes. |
| | openshift-marketplace | kubelet | redhat-marketplace-6vhlg | Started | Started container registry-server |
| | openshift-marketplace | kubelet | redhat-marketplace-6vhlg | Created | Created container: extract-content |
| | openshift-marketplace | kubelet | redhat-marketplace-6vhlg | Started | Started container extract-content |
| | openshift-marketplace | kubelet | redhat-marketplace-6vhlg | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" |
| | openshift-marketplace | kubelet | redhat-marketplace-6vhlg | Created | Created container: registry-server |
| | openshift-marketplace | kubelet | redhat-marketplace-6vhlg | Killing | Stopping container registry-server |
| | openshift-marketplace | kubelet | redhat-operators-97qj5 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1ca78c423f43f89a0411e40393642f64e4f8df9e5f61c25e31047c4cce170f9" already present on machine |
| | openshift-marketplace | multus | redhat-operators-97qj5 | AddedInterface | Add eth0 [10.128.1.59/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | redhat-operators-97qj5 | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | redhat-operators-97qj5 | Created | Created container: extract-utilities |
| | openshift-marketplace | kubelet | redhat-operators-97qj5 | Pulling | Pulling image "registry.redhat.io/redhat/redhat-operator-index:v4.18" |
| | openshift-marketplace | kubelet | redhat-operators-97qj5 | Started | Started container extract-content |
| | openshift-marketplace | kubelet | redhat-operators-97qj5 | Created | Created container: extract-content |
| | openshift-marketplace | kubelet | redhat-operators-97qj5 | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-operator-index:v4.18" in 818ms (818ms including waiting). Image size: 1610175307 bytes. |
| | openshift-marketplace | kubelet | redhat-operators-97qj5 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" |
| | openshift-marketplace | kubelet | redhat-operators-97qj5 | Created | Created container: registry-server |
| | openshift-marketplace | kubelet | redhat-operators-97qj5 | Started | Started container registry-server |
| | openshift-marketplace | kubelet | redhat-operators-97qj5 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" in 490ms (490ms including waiting). Image size: 912722556 bytes. |
| | openshift-marketplace | kubelet | redhat-operators-97qj5 | Killing | Stopping container registry-server |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-operator-lifecycle-manager | job-controller | collect-profiles-29414205 | SuccessfulCreate | Created pod: collect-profiles-29414205-lw5ls |
| | openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SuccessfulCreate | Created job collect-profiles-29414205 |
| | openshift-operator-lifecycle-manager | multus | collect-profiles-29414205-lw5ls | AddedInterface | Add eth0 [10.128.1.60/23] from ovn-kubernetes |
| | openshift-operator-lifecycle-manager | kubelet | collect-profiles-29414205-lw5ls | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1ca78c423f43f89a0411e40393642f64e4f8df9e5f61c25e31047c4cce170f9" already present on machine |
| | openshift-operator-lifecycle-manager | kubelet | collect-profiles-29414205-lw5ls | Created | Created container: collect-profiles |
| | openshift-operator-lifecycle-manager | kubelet | collect-profiles-29414205-lw5ls | Started | Started container collect-profiles |
| | openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SawCompletedJob | Saw completed job: collect-profiles-29414205, condition: Complete |
| | openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SuccessfulDelete | Deleted job collect-profiles-29414160 |
| | openshift-operator-lifecycle-manager | job-controller | collect-profiles-29414205 | Completed | Job completed |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-must-gather-mxk7d namespace |