| Time | Namespace | Component | RelatedObject | Reason | Message |
|---|---|---|---|---|---|
| | openshift-nmstate | | nmstate-handler-44nvt | Scheduled | Successfully assigned openshift-nmstate/nmstate-handler-44nvt to master-0 |
| | openshift-multus | | cni-sysctl-allowlist-ds-kl9jm | Scheduled | Successfully assigned openshift-multus/cni-sysctl-allowlist-ds-kl9jm to master-0 |
| | openstack-operators | | heat-operator-controller-manager-69f49c598c-ngkpp | Scheduled | Successfully assigned openstack-operators/heat-operator-controller-manager-69f49c598c-ngkpp to master-0 |
| | openshift-machine-api | | control-plane-machine-set-operator | ClusterIPNotAllocated | Cluster IP [IPv4]:172.30.131.46 is not allocated; repairing |
| | openstack-operators | | barbican-operator-controller-manager-868647ff47-58dhd | Scheduled | Successfully assigned openstack-operators/barbican-operator-controller-manager-868647ff47-58dhd to master-0 |
| | openshift-multus | | multus-admission-controller-bb4ff5654-mmnxt | Scheduled | Successfully assigned openshift-multus/multus-admission-controller-bb4ff5654-mmnxt to master-0 |
| | openshift-cloud-controller-manager-operator | | cluster-cloud-controller-manager-operator-5b487c8bfc-jdktd | Scheduled | Successfully assigned openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-jdktd to master-0 |
| | openshift-nmstate | | nmstate-metrics-58c85c668d-xtbrb | Scheduled | Successfully assigned openshift-nmstate/nmstate-metrics-58c85c668d-xtbrb to master-0 |
| | openshift-monitoring | | thanos-querier-85c85bc675-62rqj | Scheduled | Successfully assigned openshift-monitoring/thanos-querier-85c85bc675-62rqj to master-0 |
| | openshift-machine-api | | control-plane-machine-set-operator-d8bf84b88-hmpc7 | Scheduled | Successfully assigned openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-hmpc7 to master-0 |
| | cert-manager | | cert-manager-545d4d4674-xrzb8 | Scheduled | Successfully assigned cert-manager/cert-manager-545d4d4674-xrzb8 to master-0 |
| | sushy-emulator | | sushy-emulator-64488c485f-5kt65 | Scheduled | Successfully assigned sushy-emulator/sushy-emulator-64488c485f-5kt65 to master-0 |
| | sushy-emulator | | sushy-emulator-58f4c9b998-jd8tg | Scheduled | Successfully assigned sushy-emulator/sushy-emulator-58f4c9b998-jd8tg to master-0 |
| | sushy-emulator | | nova-console-recorder-7ccbcf9885-b7b8v | Scheduled | Successfully assigned sushy-emulator/nova-console-recorder-7ccbcf9885-b7b8v to master-0 |
| | openstack | | nova-api-0 | Scheduled | Successfully assigned openstack/nova-api-0 to master-0 |
| | openstack-operators | | watcher-operator-controller-manager-5db88f68c-ctk27 | Scheduled | Successfully assigned openstack-operators/watcher-operator-controller-manager-5db88f68c-ctk27 to master-0 |
| | openstack-operators | | test-operator-controller-manager-7866795846-2vx66 | Scheduled | Successfully assigned openstack-operators/test-operator-controller-manager-7866795846-2vx66 to master-0 |
| | cert-manager | | cert-manager-cainjector-5545bd876-62r82 | Scheduled | Successfully assigned cert-manager/cert-manager-cainjector-5545bd876-62r82 to master-0 |
| | openstack-operators | | telemetry-operator-controller-manager-7f45b4ff68-wk82b | Scheduled | Successfully assigned openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-wk82b to master-0 |
| | openstack-operators | | swift-operator-controller-manager-68f46476f-zdksg | Scheduled | Successfully assigned openstack-operators/swift-operator-controller-manager-68f46476f-zdksg to master-0 |
| | openstack-operators | | rabbitmq-cluster-operator-manager-668c99d594-hqlr5 | Scheduled | Successfully assigned openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hqlr5 to master-0 |
| | openstack-operators | | placement-operator-controller-manager-8497b45c89-dbcqg | Scheduled | Successfully assigned openstack-operators/placement-operator-controller-manager-8497b45c89-dbcqg to master-0 |
| | openstack-operators | | ovn-operator-controller-manager-d44cf6b75-gwh4x | Scheduled | Successfully assigned openstack-operators/ovn-operator-controller-manager-d44cf6b75-gwh4x to master-0 |
| | openstack-operators | | openstack-operator-index-gzfb5 | Scheduled | Successfully assigned openstack-operators/openstack-operator-index-gzfb5 to master-0 |
| | openstack-operators | | openstack-operator-index-chx5x | Scheduled | Successfully assigned openstack-operators/openstack-operator-index-chx5x to master-0 |
| | openstack-operators | | openstack-operator-controller-manager-74d597bfd6-98qgl | Scheduled | Successfully assigned openstack-operators/openstack-operator-controller-manager-74d597bfd6-98qgl to master-0 |
| | cert-manager | | cert-manager-webhook-6888856db4-d6jf7 | Scheduled | Successfully assigned cert-manager/cert-manager-webhook-6888856db4-d6jf7 to master-0 |
| | openstack-operators | | openstack-operator-controller-init-7f8db498b4-66blt | Scheduled | Successfully assigned openstack-operators/openstack-operator-controller-init-7f8db498b4-66blt to master-0 |
| | openstack-operators | | openstack-baremetal-operator-controller-manager-5f8cd6b89br8pdn | Scheduled | Successfully assigned openstack-operators/openstack-baremetal-operator-controller-manager-5f8cd6b89br8pdn to master-0 |
| | openstack-operators | | octavia-operator-controller-manager-69f8888797-6sx67 | Scheduled | Successfully assigned openstack-operators/octavia-operator-controller-manager-69f8888797-6sx67 to master-0 |
| | openstack-operators | | nova-operator-controller-manager-567668f5cf-2td54 | Scheduled | Successfully assigned openstack-operators/nova-operator-controller-manager-567668f5cf-2td54 to master-0 |
| | openstack-operators | | neutron-operator-controller-manager-64ddbf8bb-5mtgr | Scheduled | Successfully assigned openstack-operators/neutron-operator-controller-manager-64ddbf8bb-5mtgr to master-0 |
| | openstack-operators | | mariadb-operator-controller-manager-6994f66f48-dgqgn | Scheduled | Successfully assigned openstack-operators/mariadb-operator-controller-manager-6994f66f48-dgqgn to master-0 |
| | openstack-operators | | manila-operator-controller-manager-54f6768c69-fnw4p | Scheduled | Successfully assigned openstack-operators/manila-operator-controller-manager-54f6768c69-fnw4p to master-0 |
| | openstack-operators | | keystone-operator-controller-manager-b4d948c87-xnzn6 | Scheduled | Successfully assigned openstack-operators/keystone-operator-controller-manager-b4d948c87-xnzn6 to master-0 |
| | openstack-operators | | ironic-operator-controller-manager-554564d7fc-x78p9 | Scheduled | Successfully assigned openstack-operators/ironic-operator-controller-manager-554564d7fc-x78p9 to master-0 |
| | openstack-operators | | infra-operator-controller-manager-5f879c76b6-2x4ww | Scheduled | Successfully assigned openstack-operators/infra-operator-controller-manager-5f879c76b6-2x4ww to master-0 |
| | openshift-console | | console-86d4dfb9dd-rz6cj | Scheduled | Successfully assigned openshift-console/console-86d4dfb9dd-rz6cj to master-0 |
| | openstack | | cinder-04ef3-scheduler-0 | Scheduled | Successfully assigned openstack/cinder-04ef3-scheduler-0 to master-0 |
| | openstack-operators | | horizon-operator-controller-manager-5b9b8895d5-2wdk9 | Scheduled | Successfully assigned openstack-operators/horizon-operator-controller-manager-5b9b8895d5-2wdk9 to master-0 |
| | openstack-operators | | heat-operator-controller-manager-69f49c598c-ngkpp | Scheduled | Successfully assigned openstack-operators/heat-operator-controller-manager-69f49c598c-ngkpp to master-0 |
| | openshift-cluster-storage-operator | | cluster-storage-operator-75b869db96-qbmw5 | Scheduled | Successfully assigned openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-qbmw5 to master-0 |
| | openshift-nmstate | | nmstate-handler-44nvt | Scheduled | Successfully assigned openshift-nmstate/nmstate-handler-44nvt to master-0 |
| | openshift-console | | console-98f66b5dc-p2gxf | Scheduled | Successfully assigned openshift-console/console-98f66b5dc-p2gxf to master-0 |
| | openstack | | cinder-04ef3-volume-lvm-iscsi-0 | Scheduled | Successfully assigned openstack/cinder-04ef3-volume-lvm-iscsi-0 to master-0 |
| | openstack | | cinder-04ef3-volume-lvm-iscsi-0 | Scheduled | Successfully assigned openstack/cinder-04ef3-volume-lvm-iscsi-0 to master-0 |
| | openstack | | cinder-be98-account-create-update-ccwpm | Scheduled | Successfully assigned openstack/cinder-be98-account-create-update-ccwpm to master-0 |
| | openshift-storage | | lvms-operator-59b4cb8ccf-q5dk5 | Scheduled | Successfully assigned openshift-storage/lvms-operator-59b4cb8ccf-q5dk5 to master-0 |
| | openstack | | cinder-04ef3-scheduler-0 | Scheduled | Successfully assigned openstack/cinder-04ef3-scheduler-0 to master-0 |
| | openshift-machine-api | | machine-api-operator-bd7dd5c46-g6fgz | Scheduled | Successfully assigned openshift-machine-api/machine-api-operator-bd7dd5c46-g6fgz to master-0 |
| | openshift-cloud-credential-operator | | cloud-credential-operator-595c8f9ff-p8hbc | Scheduled | Successfully assigned openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-p8hbc to master-0 |
| | openshift-network-diagnostics | | network-check-source-7d8f4c8c66-fc8n7 | Scheduled | Successfully assigned openshift-network-diagnostics/network-check-source-7d8f4c8c66-fc8n7 to master-0 |
| | openstack | | cinder-04ef3-db-sync-smx72 | Scheduled | Successfully assigned openstack/cinder-04ef3-db-sync-smx72 to master-0 |
| | openstack | | cinder-04ef3-backup-0 | Scheduled | Successfully assigned openstack/cinder-04ef3-backup-0 to master-0 |
| | openshift-console | | console-6f45cc898f-z9tb2 | Scheduled | Successfully assigned openshift-console/console-6f45cc898f-z9tb2 to master-0 |
| | openstack-operators | | nova-operator-controller-manager-567668f5cf-2td54 | Scheduled | Successfully assigned openstack-operators/nova-operator-controller-manager-567668f5cf-2td54 to master-0 |
| | openstack-operators | | octavia-operator-controller-manager-69f8888797-6sx67 | Scheduled | Successfully assigned openstack-operators/octavia-operator-controller-manager-69f8888797-6sx67 to master-0 |
| | openstack-operators | | glance-operator-controller-manager-77987464f4-sqmnn | Scheduled | Successfully assigned openstack-operators/glance-operator-controller-manager-77987464f4-sqmnn to master-0 |
| | openstack-operators | | designate-operator-controller-manager-6d8bf5c495-nn59f | Scheduled | Successfully assigned openstack-operators/designate-operator-controller-manager-6d8bf5c495-nn59f to master-0 |
| | openstack-operators | | cinder-operator-controller-manager-5d946d989d-6mnh8 | Scheduled | Successfully assigned openstack-operators/cinder-operator-controller-manager-5d946d989d-6mnh8 to master-0 |
| | openshift-network-diagnostics | | network-check-source-7d8f4c8c66-fc8n7 | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-insights | | insights-operator-cb4f7b4cf-cmbjq | Scheduled | Successfully assigned openshift-insights/insights-operator-cb4f7b4cf-cmbjq to master-0 |
| | openshift-network-console | | networking-console-plugin-bd6d6f87f-72mnn | Scheduled | Successfully assigned openshift-network-console/networking-console-plugin-bd6d6f87f-72mnn to master-0 |
| | openshift-nmstate | | nmstate-console-plugin-5c78fc5d65-c9ckb | Scheduled | Successfully assigned openshift-nmstate/nmstate-console-plugin-5c78fc5d65-c9ckb to master-0 |
| | openshift-cluster-samples-operator | | cluster-samples-operator-f8cbff74c-hr9g4 | Scheduled | Successfully assigned openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-hr9g4 to master-0 |
| | openshift-cluster-version | | cluster-version-operator-649c4f5445-7kdb7 | Scheduled | Successfully assigned openshift-cluster-version/cluster-version-operator-649c4f5445-7kdb7 to master-0 |
| | openstack | | dnsmasq-dns-6fd49994df-55jsp | Scheduled | Successfully assigned openstack/dnsmasq-dns-6fd49994df-55jsp to master-0 |
| | openstack | | cinder-db-create-5fmzp | Scheduled | Successfully assigned openstack/cinder-db-create-5fmzp to master-0 |
| | openstack | | dnsmasq-dns-5c7b6fb887-tpv9d | Scheduled | Successfully assigned openstack/dnsmasq-dns-5c7b6fb887-tpv9d to master-0 |
| | openstack | | dnsmasq-dns-5f4c4c4d6c-fsk8m | Scheduled | Successfully assigned openstack/dnsmasq-dns-5f4c4c4d6c-fsk8m to master-0 |
| | openstack | | dnsmasq-dns-676f54c559-bfcw7 | Scheduled | Successfully assigned openstack/dnsmasq-dns-676f54c559-bfcw7 to master-0 |
| | openstack | | dnsmasq-dns-67dc4d787c-m7s4w | Scheduled | Successfully assigned openstack/dnsmasq-dns-67dc4d787c-m7s4w to master-0 |
| | openshift-monitoring | | kube-state-metrics-7cc9598d54-z7lzs | Scheduled | Successfully assigned openshift-monitoring/kube-state-metrics-7cc9598d54-z7lzs to master-0 |
| | openstack-operators | | barbican-operator-controller-manager-868647ff47-58dhd | Scheduled | Successfully assigned openstack-operators/barbican-operator-controller-manager-868647ff47-58dhd to master-0 |
| | openstack-operators | | 4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21lwt5p | Scheduled | Successfully assigned openstack-operators/4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21lwt5p to master-0 |
| | openshift-operators | | perses-operator-5bf474d74f-tw9pm | Scheduled | Successfully assigned openshift-operators/perses-operator-5bf474d74f-tw9pm to master-0 |
| | openshift-operators | | observability-operator-59bdc8b94-d8nkj | Scheduled | Successfully assigned openshift-operators/observability-operator-59bdc8b94-d8nkj to master-0 |
| | openshift-operators | | obo-prometheus-operator-admission-webhook-79ffb45c8c-gqgb7 | Scheduled | Successfully assigned openshift-operators/obo-prometheus-operator-admission-webhook-79ffb45c8c-gqgb7 to master-0 |
| | openshift-operators | | obo-prometheus-operator-admission-webhook-79ffb45c8c-9jw4s | Scheduled | Successfully assigned openshift-operators/obo-prometheus-operator-admission-webhook-79ffb45c8c-9jw4s to master-0 |
| | openstack | | dnsmasq-dns-6fd854f54c-g52n4 | Scheduled | Successfully assigned openstack/dnsmasq-dns-6fd854f54c-g52n4 to master-0 |
| | openshift-operators | | obo-prometheus-operator-68bc856cb9-5tqc8 | Scheduled | Successfully assigned openshift-operators/obo-prometheus-operator-68bc856cb9-5tqc8 to master-0 |
| | openshift-ingress-canary | | ingress-canary-6bhf8 | Scheduled | Successfully assigned openshift-ingress-canary/ingress-canary-6bhf8 to master-0 |
| | openstack-operators | | keystone-operator-controller-manager-b4d948c87-xnzn6 | Scheduled | Successfully assigned openstack-operators/keystone-operator-controller-manager-b4d948c87-xnzn6 to master-0 |
| | openshift-route-controller-manager | | route-controller-manager-68f4c9ccfc-vg949 | Scheduled | Successfully assigned openshift-route-controller-manager/route-controller-manager-68f4c9ccfc-vg949 to master-0 |
| | openshift-ingress | | router-default-864ddd5f56-g8w2f | Scheduled | Successfully assigned openshift-ingress/router-default-864ddd5f56-g8w2f to master-0 |
| | openshift-ingress | | router-default-864ddd5f56-g8w2f | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-machine-api | | control-plane-machine-set-operator-d8bf84b88-hmpc7 | Scheduled | Successfully assigned openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-hmpc7 to master-0 |
| | openstack | | nova-api-0 | Scheduled | Successfully assigned openstack/nova-api-0 to master-0 |
| | openshift-machine-api | | control-plane-machine-set-operator | ClusterIPNotAllocated | Cluster IP [IPv4]:172.30.131.46 is not allocated; repairing |
| | openshift-machine-api | | machine-api-operator-bd7dd5c46-g6fgz | Scheduled | Successfully assigned openshift-machine-api/machine-api-operator-bd7dd5c46-g6fgz to master-0 |
| | openshift-nmstate | | nmstate-metrics-58c85c668d-xtbrb | Scheduled | Successfully assigned openshift-nmstate/nmstate-metrics-58c85c668d-xtbrb to master-0 |
| | openshift-storage | | vg-manager-5rvk7 | Scheduled | Successfully assigned openshift-storage/vg-manager-5rvk7 to master-0 |
| | openshift-monitoring | | node-exporter-rttp2 | Scheduled | Successfully assigned openshift-monitoring/node-exporter-rttp2 to master-0 |
| | openshift-nmstate | | nmstate-operator-694c9596b7-vbkqw | Scheduled | Successfully assigned openshift-nmstate/nmstate-operator-694c9596b7-vbkqw to master-0 |
| | openshift-monitoring | | monitoring-plugin-6f86647c68-r4plh | Scheduled | Successfully assigned openshift-monitoring/monitoring-plugin-6f86647c68-r4plh to master-0 |
| | openshift-route-controller-manager | | route-controller-manager-68f4c9ccfc-vg949 | FailedScheduling | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. |
| | openshift-nmstate | | nmstate-webhook-866bcb46dc-4q7kf | Scheduled | Successfully assigned openshift-nmstate/nmstate-webhook-866bcb46dc-4q7kf to master-0 |
| | openshift-machine-api | | cluster-baremetal-operator-7bc947fc7d-8qkdw | Scheduled | Successfully assigned openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-8qkdw to master-0 |
| | openshift-multus | | multus-admission-controller-6d678b8d67-rzbff | Scheduled | Successfully assigned openshift-multus/multus-admission-controller-6d678b8d67-rzbff to master-0 |
| | openstack-operators | | cinder-operator-controller-manager-5d946d989d-6mnh8 | Scheduled | Successfully assigned openstack-operators/cinder-operator-controller-manager-5d946d989d-6mnh8 to master-0 |
| | openshift-controller-manager | | controller-manager-f6b44f49-s25nf | Scheduled | Successfully assigned openshift-controller-manager/controller-manager-f6b44f49-s25nf to master-0 |
| | openshift-controller-manager | | controller-manager-f6b44f49-s25nf | FailedScheduling | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. |
| | openshift-image-registry | | node-ca-knz2d | Scheduled | Successfully assigned openshift-image-registry/node-ca-knz2d to master-0 |
| | metallb-system | | speaker-mj82t | Scheduled | Successfully assigned metallb-system/speaker-mj82t to master-0 |
| | openstack-operators | | manila-operator-controller-manager-54f6768c69-fnw4p | Scheduled | Successfully assigned openstack-operators/manila-operator-controller-manager-54f6768c69-fnw4p to master-0 |
| | metallb-system | | metallb-operator-webhook-server-7664575c4d-8f7gv | Scheduled | Successfully assigned metallb-system/metallb-operator-webhook-server-7664575c4d-8f7gv to master-0 |
| | openstack | | nova-api-0 | Scheduled | Successfully assigned openstack/nova-api-0 to master-0 |
| | metallb-system | | metallb-operator-controller-manager-7f874cc45d-jsprx | Scheduled | Successfully assigned metallb-system/metallb-operator-controller-manager-7f874cc45d-jsprx to master-0 |
| | metallb-system | | frr-k8s-webhook-server-78b44bf5bb-x52ls | Scheduled | Successfully assigned metallb-system/frr-k8s-webhook-server-78b44bf5bb-x52ls to master-0 |
| | metallb-system | | frr-k8s-t5g7s | Scheduled | Successfully assigned metallb-system/frr-k8s-t5g7s to master-0 |
| | metallb-system | | controller-69bbfbf88f-8w79x | Scheduled | Successfully assigned metallb-system/controller-69bbfbf88f-8w79x to master-0 |
| | cert-manager | | cert-manager-webhook-6888856db4-d6jf7 | Scheduled | Successfully assigned cert-manager/cert-manager-webhook-6888856db4-d6jf7 to master-0 |
| | cert-manager | | cert-manager-cainjector-5545bd876-62r82 | Scheduled | Successfully assigned cert-manager/cert-manager-cainjector-5545bd876-62r82 to master-0 |
| | cert-manager | | cert-manager-545d4d4674-xrzb8 | Scheduled | Successfully assigned cert-manager/cert-manager-545d4d4674-xrzb8 to master-0 |
| | openshift-authentication | | oauth-openshift-56d478877c-mlr8b | Scheduled | Successfully assigned openshift-authentication/oauth-openshift-56d478877c-mlr8b to master-0 |
| | openshift-authentication | | oauth-openshift-56d478877c-mlr8b | FailedScheduling | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. |
| | openshift-authentication | | oauth-openshift-56d478877c-mlr8b | FailedScheduling | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. |
| | openstack | | glance-7b9c2-default-external-api-0 | Scheduled | Successfully assigned openstack/glance-7b9c2-default-external-api-0 to master-0 |
| | openstack-operators | | watcher-operator-controller-manager-5db88f68c-ctk27 | Scheduled | Successfully assigned openstack-operators/watcher-operator-controller-manager-5db88f68c-ctk27 to master-0 |
| | openstack-operators | | test-operator-controller-manager-7866795846-2vx66 | Scheduled | Successfully assigned openstack-operators/test-operator-controller-manager-7866795846-2vx66 to master-0 |
| | openstack-operators | | telemetry-operator-controller-manager-7f45b4ff68-wk82b | Scheduled | Successfully assigned openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-wk82b to master-0 |
| | openstack-operators | | swift-operator-controller-manager-68f46476f-zdksg | Scheduled | Successfully assigned openstack-operators/swift-operator-controller-manager-68f46476f-zdksg to master-0 |
| | openstack-operators | | rabbitmq-cluster-operator-manager-668c99d594-hqlr5 | Scheduled | Successfully assigned openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hqlr5 to master-0 |
| | openstack-operators | | placement-operator-controller-manager-8497b45c89-dbcqg | Scheduled | Successfully assigned openstack-operators/placement-operator-controller-manager-8497b45c89-dbcqg to master-0 |
| | openshift-machine-api | | cluster-autoscaler-operator-67fd9768b5-6dzpr | Scheduled | Successfully assigned openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-6dzpr to master-0 |
| | openstack-operators | | ironic-operator-controller-manager-554564d7fc-x78p9 | Scheduled | Successfully assigned openstack-operators/ironic-operator-controller-manager-554564d7fc-x78p9 to master-0 |
| | openshift-multus | | multus-admission-controller-6d678b8d67-rzbff | Scheduled | Successfully assigned openshift-multus/multus-admission-controller-6d678b8d67-rzbff to master-0 |
| | openshift-machine-config-operator | | machine-config-controller-686c884b4d-5q97f | Scheduled | Successfully assigned openshift-machine-config-operator/machine-config-controller-686c884b4d-5q97f to master-0 |
| | openshift-operator-lifecycle-manager | | collect-profiles-29522415-dwsg2 | Scheduled | Successfully assigned openshift-operator-lifecycle-manager/collect-profiles-29522415-dwsg2 to master-0 |
| | openshift-monitoring | | metrics-server-75c4d5b7f-t6zcq | Scheduled | Successfully assigned openshift-monitoring/metrics-server-75c4d5b7f-t6zcq to master-0 |
| | openshift-operator-lifecycle-manager | | collect-profiles-29522400-hgd4s | Scheduled | Successfully assigned openshift-operator-lifecycle-manager/collect-profiles-29522400-hgd4s to master-0 |
| | openshift-monitoring | | metrics-server-f94977f65-sgf5z | Scheduled | Successfully assigned openshift-monitoring/metrics-server-f94977f65-sgf5z to master-0 |
| | openshift-console | | console-5995fb765-xddwx | Scheduled | Successfully assigned openshift-console/console-5995fb765-xddwx to master-0 |
| | openstack | | dnsmasq-dns-68b4779d45-4ql8j | Scheduled | Successfully assigned openstack/dnsmasq-dns-68b4779d45-4ql8j to master-0 |
| | openshift-console | | downloads-dcd7b7d95-vtnfs | Scheduled | Successfully assigned openshift-console/downloads-dcd7b7d95-vtnfs to master-0 |
| | openstack | | dnsmasq-dns-6b98d7b55c-hdh27 | Scheduled | Successfully assigned openstack/dnsmasq-dns-6b98d7b55c-hdh27 to master-0 |
| | openstack | | dnsmasq-dns-6b9c77ddfc-d9zgc | Scheduled | Successfully assigned openstack/dnsmasq-dns-6b9c77ddfc-d9zgc to master-0 |
| | openstack-operators | | 4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21lwt5p | Scheduled | Successfully assigned openstack-operators/4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21lwt5p to master-0 |
| | openshift-monitoring | | telemeter-client-7fbdcd9689-spqtt | Scheduled | Successfully assigned openshift-monitoring/telemeter-client-7fbdcd9689-spqtt to master-0 |
| | openshift-console | | console-55495f9f9c-p58l5 | Scheduled | Successfully assigned openshift-console/console-55495f9f9c-p58l5 to master-0 |
| | openshift-operator-lifecycle-manager | | collect-profiles-29522385-7rwjt | Scheduled | Successfully assigned openshift-operator-lifecycle-manager/collect-profiles-29522385-7rwjt to master-0 |
| | openshift-machine-api | | cluster-autoscaler-operator-67fd9768b5-6dzpr | Scheduled | Successfully assigned openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-6dzpr to master-0 |
| | openshift-monitoring | | kube-state-metrics-7cc9598d54-z7lzs | Scheduled | Successfully assigned openshift-monitoring/kube-state-metrics-7cc9598d54-z7lzs to master-0 |
| | openshift-monitoring | | alertmanager-main-0 | Scheduled | Successfully assigned openshift-monitoring/alertmanager-main-0 to master-0 |
| | openshift-monitoring | | metrics-server-75c4d5b7f-t6zcq | Scheduled | Successfully assigned openshift-monitoring/metrics-server-75c4d5b7f-t6zcq to master-0 |
| | openshift-monitoring | | metrics-server-f94977f65-sgf5z | Scheduled | Successfully assigned openshift-monitoring/metrics-server-f94977f65-sgf5z to master-0 |
| | openshift-monitoring | | monitoring-plugin-6f86647c68-r4plh | Scheduled | Successfully assigned openshift-monitoring/monitoring-plugin-6f86647c68-r4plh to master-0 |
| | openshift-monitoring | | node-exporter-rttp2 | Scheduled | Successfully assigned openshift-monitoring/node-exporter-rttp2 to master-0 |
| | openshift-monitoring | | alertmanager-main-0 | Scheduled | Successfully assigned openshift-monitoring/alertmanager-main-0 to master-0 |
| | openshift-marketplace | | redhat-operators-wzsv7 | Scheduled | Successfully assigned openshift-marketplace/redhat-operators-wzsv7 to master-0 |
| | openshift-marketplace | | redhat-marketplace-7dzgz | Scheduled | Successfully assigned openshift-marketplace/redhat-marketplace-7dzgz to master-0 |
| | openshift-machine-config-operator | | machine-config-daemon-r6sfp | Scheduled | Successfully assigned openshift-machine-config-operator/machine-config-daemon-r6sfp to master-0 |
| | openshift-operator-lifecycle-manager | | collect-profiles-29522370-xqzfs | Scheduled | Successfully assigned openshift-operator-lifecycle-manager/collect-profiles-29522370-xqzfs to master-0 |
| | openshift-operator-lifecycle-manager | | collect-profiles-29522355-rfrsq | Scheduled | Successfully assigned openshift-operator-lifecycle-manager/collect-profiles-29522355-rfrsq to master-0 |
| | openshift-machine-config-operator | | machine-config-operator-84976bb859-kmc95 | Scheduled | Successfully assigned openshift-machine-config-operator/machine-config-operator-84976bb859-kmc95 to master-0 |
| | openshift-console-operator | | console-operator-7777d5cc66-w62mx | Scheduled | Successfully assigned openshift-console-operator/console-operator-7777d5cc66-w62mx to master-0 |
| | openstack-operators | | horizon-operator-controller-manager-5b9b8895d5-2wdk9 | Scheduled | Successfully assigned openstack-operators/horizon-operator-controller-manager-5b9b8895d5-2wdk9 to master-0 |
| | openshift-multus | | cni-sysctl-allowlist-ds-kl9jm | Scheduled | Successfully assigned openshift-multus/cni-sysctl-allowlist-ds-kl9jm to master-0 |
| | openstack | | swift-storage-0 | Scheduled | Successfully assigned openstack/swift-storage-0 to master-0 |
| | openstack | | swift-ring-rebalance-4xb95 | Scheduled | Successfully assigned openstack/swift-ring-rebalance-4xb95 to master-0 |
| | openshift-multus | | cni-sysctl-allowlist-ds-c5mq6 | Scheduled | Successfully assigned openshift-multus/cni-sysctl-allowlist-ds-c5mq6 to master-0 |
| | openshift-monitoring | | thanos-querier-85c85bc675-62rqj | Scheduled | Successfully assigned openshift-monitoring/thanos-querier-85c85bc675-62rqj to master-0 |
| | openshift-monitoring | | telemeter-client-7fbdcd9689-spqtt | Scheduled | Successfully assigned openshift-monitoring/telemeter-client-7fbdcd9689-spqtt to master-0 |
| | openshift-monitoring | | telemeter-client-7fbdcd9689-jnzwg | Scheduled | Successfully assigned openshift-monitoring/telemeter-client-7fbdcd9689-jnzwg to master-0 |
| | openshift-monitoring | | telemeter-client-7fbdcd9689-jnzwg | Scheduled | Successfully assigned openshift-monitoring/telemeter-client-7fbdcd9689-jnzwg to master-0 |
| | openshift-operator-lifecycle-manager | | collect-profiles-29522340-8cp6h | Scheduled | Successfully assigned openshift-operator-lifecycle-manager/collect-profiles-29522340-8cp6h to master-0 |
| | openshift-cluster-machine-approver | | machine-approver-6c46d95f74-nsmfx | Scheduled | Successfully assigned openshift-cluster-machine-approver/machine-approver-6c46d95f74-nsmfx to master-0 |
| | openstack-operators | | openstack-baremetal-operator-controller-manager-5f8cd6b89br8pdn | Scheduled | Successfully assigned openstack-operators/openstack-baremetal-operator-controller-manager-5f8cd6b89br8pdn to master-0 |
| | openstack-operators | | mariadb-operator-controller-manager-6994f66f48-dgqgn | Scheduled | Successfully assigned openstack-operators/mariadb-operator-controller-manager-6994f66f48-dgqgn to master-0 |
| | openstack | | swift-proxy-67bfcfbcf8-m9tkq | Scheduled | Successfully assigned openstack/swift-proxy-67bfcfbcf8-m9tkq to master-0 |
| | openstack | | root-account-create-update-tdkt8 | Scheduled | Successfully assigned openstack/root-account-create-update-tdkt8 to master-0 |
| | openstack | | root-account-create-update-sqtzz | Scheduled | Successfully assigned openstack/root-account-create-update-sqtzz to master-0 |
| | openshift-machine-config-operator | | machine-config-server-l576h | Scheduled | Successfully assigned openshift-machine-config-operator/machine-config-server-l576h to master-0 |
| | openshift-marketplace | | 7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4s65nx | Scheduled | Successfully assigned openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4s65nx to master-0 |
| | openshift-operator-lifecycle-manager | | collect-profiles-29522340-8cp6h | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | metallb-system | | controller-69bbfbf88f-8w79x | Scheduled | Successfully assigned metallb-system/controller-69bbfbf88f-8w79x to master-0 |
| | openstack | | rabbitmq-server-0 | Scheduled | Successfully assigned openstack/rabbitmq-server-0 to master-0 |
| | openstack | | rabbitmq-cell1-server-0 | Scheduled | Successfully assigned openstack/rabbitmq-cell1-server-0 to master-0 |
| | openstack | | placement-db-sync-tgjmt | Scheduled | Successfully assigned openstack/placement-db-sync-tgjmt to master-0 |
| | openstack | | placement-db-create-kjk8x | Scheduled | Successfully assigned openstack/placement-db-create-kjk8x to master-0 |
| | openstack | | placement-5fd74d8d4b-qd7wh | Scheduled | Successfully assigned openstack/placement-5fd74d8d4b-qd7wh to master-0 |
| | openstack | | placement-5b57c6d9b6-frt4v | Scheduled | Successfully assigned openstack/placement-5b57c6d9b6-frt4v to master-0 |
| | openstack | | placement-094f-account-create-update-9dg59 | Scheduled | Successfully assigned openstack/placement-094f-account-create-update-9dg59 to master-0 |
| | openstack | | ovsdbserver-sb-0 | Scheduled | Successfully assigned openstack/ovsdbserver-sb-0 to master-0 |
| | openstack | | ovsdbserver-nb-0 | Scheduled | Successfully assigned openstack/ovsdbserver-nb-0 to master-0 |
| | openstack | | ovn-northd-0 | Scheduled | Successfully assigned openstack/ovn-northd-0 to master-0 |
| | openstack | | ovn-controller-ovs-fxgqd | Scheduled | Successfully assigned openstack/ovn-controller-ovs-fxgqd to master-0 |
| | metallb-system | | frr-k8s-t5g7s | Scheduled | Successfully assigned metallb-system/frr-k8s-t5g7s to master-0 |
| | openstack | | ovn-controller-metrics-wwqh5 | Scheduled | Successfully assigned openstack/ovn-controller-metrics-wwqh5 to master-0 |
| | openstack | | ovn-controller-hdbmn | Scheduled | Successfully assigned openstack/ovn-controller-hdbmn to master-0 |
| | openstack | | openstackclient | Scheduled | Successfully assigned openstack/openstackclient to master-0 |
| | openstack | | openstack-galera-0 | Scheduled | Successfully assigned openstack/openstack-galera-0 to master-0 |
| | openstack | | openstack-cell1-galera-0 | Scheduled | Successfully assigned openstack/openstack-cell1-galera-0 to master-0 |
| | openstack | | nova-scheduler-0 | Scheduled | Successfully assigned openstack/nova-scheduler-0 to master-0 |
| | openstack | | nova-scheduler-0 | Scheduled | Successfully assigned openstack/nova-scheduler-0 to master-0 |
| | openstack | | nova-scheduler-0 | Scheduled | Successfully assigned openstack/nova-scheduler-0 to master-0 |
| | openstack | | nova-metadata-0 | Scheduled | Successfully assigned openstack/nova-metadata-0 to master-0 |
| | openstack | | nova-metadata-0 | Scheduled | Successfully assigned openstack/nova-metadata-0 to master-0 |
| | openstack | | nova-metadata-0 | Scheduled | Successfully assigned openstack/nova-metadata-0 to master-0 |
| | openstack | | nova-metadata-0 | Scheduled | Successfully assigned openstack/nova-metadata-0 to master-0 |
| | openstack | | nova-cell1-novncproxy-0 | Scheduled | Successfully assigned openstack/nova-cell1-novncproxy-0 to master-0 |
| | openstack | | nova-cell1-novncproxy-0 | Scheduled | Successfully assigned openstack/nova-cell1-novncproxy-0 to master-0 |
| | openstack | | nova-cell1-host-discover-7vrrr | Scheduled | Successfully assigned openstack/nova-cell1-host-discover-7vrrr to master-0 |
| | openstack | | nova-cell1-f7f8-account-create-update-2x5s2 | Scheduled | Successfully assigned openstack/nova-cell1-f7f8-account-create-update-2x5s2 to master-0 |
| | openstack | | nova-cell1-db-create-69tfm | | |
Scheduled |
Successfully assigned openstack/nova-cell1-db-create-69tfm to master-0 | ||
openstack |
nova-cell1-conductor-db-sync-4vxwz |
Scheduled |
Successfully assigned openstack/nova-cell1-conductor-db-sync-4vxwz to master-0 | ||
openstack |
nova-cell1-conductor-0 |
Scheduled |
Successfully assigned openstack/nova-cell1-conductor-0 to master-0 | ||
openstack |
nova-cell1-compute-ironic-compute-0 |
Scheduled |
Successfully assigned openstack/nova-cell1-compute-ironic-compute-0 to master-0 | ||
openstack |
nova-cell1-cell-mapping-5x59m |
Scheduled |
Successfully assigned openstack/nova-cell1-cell-mapping-5x59m to master-0 | ||
openstack |
nova-cell0-db-create-pbs2f |
Scheduled |
Successfully assigned openstack/nova-cell0-db-create-pbs2f to master-0 | ||
openstack |
nova-cell0-conductor-db-sync-8gbxf |
Scheduled |
Successfully assigned openstack/nova-cell0-conductor-db-sync-8gbxf to master-0 | ||
openstack |
nova-cell0-conductor-0 |
Scheduled |
Successfully assigned openstack/nova-cell0-conductor-0 to master-0 | ||
openstack |
nova-cell0-cell-mapping-9btmx |
Scheduled |
Successfully assigned openstack/nova-cell0-cell-mapping-9btmx to master-0 | ||
metallb-system |
frr-k8s-webhook-server-78b44bf5bb-x52ls |
Scheduled |
Successfully assigned metallb-system/frr-k8s-webhook-server-78b44bf5bb-x52ls to master-0 | ||
openstack |
nova-cell0-5cd4-account-create-update-hwzx4 |
Scheduled |
Successfully assigned openstack/nova-cell0-5cd4-account-create-update-hwzx4 to master-0 | ||
openstack |
nova-api-db-create-4lmzn |
Scheduled |
Successfully assigned openstack/nova-api-db-create-4lmzn to master-0 | ||
openstack |
nova-api-87e5-account-create-update-45dj5 |
Scheduled |
Successfully assigned openstack/nova-api-87e5-account-create-update-45dj5 to master-0 | ||
openstack |
nova-api-0 |
Scheduled |
Successfully assigned openstack/nova-api-0 to master-0 | ||
sushy-emulator |
nova-console-poller-76bf7fdbf7-kfl2c |
Scheduled |
Successfully assigned sushy-emulator/nova-console-poller-76bf7fdbf7-kfl2c to master-0 | ||
openshift-authentication |
oauth-openshift-5cdd6dbfff-tvzt9 |
Scheduled |
Successfully assigned openshift-authentication/oauth-openshift-5cdd6dbfff-tvzt9 to master-0 | ||
openshift-monitoring |
prometheus-operator-admission-webhook-695b766898-nm8rs |
Scheduled |
Successfully assigned openshift-monitoring/prometheus-operator-admission-webhook-695b766898-nm8rs to master-0 | ||
openstack |
neutron-db-sync-kr2xk |
Scheduled |
Successfully assigned openstack/neutron-db-sync-kr2xk to master-0 | ||
openstack |
neutron-db-create-g9g6p |
Scheduled |
Successfully assigned openstack/neutron-db-create-g9g6p to master-0 | ||
metallb-system |
metallb-operator-controller-manager-7f874cc45d-jsprx |
Scheduled |
Successfully assigned metallb-system/metallb-operator-controller-manager-7f874cc45d-jsprx to master-0 | ||
openstack |
neutron-7c6d47966f-zhq5k |
Scheduled |
Successfully assigned openstack/neutron-7c6d47966f-zhq5k to master-0 | ||
openstack |
neutron-5c5cd8d-bjbtl |
Scheduled |
Successfully assigned openstack/neutron-5c5cd8d-bjbtl to master-0 | ||
openstack |
neutron-406d-account-create-update-qv9dz |
Scheduled |
Successfully assigned openstack/neutron-406d-account-create-update-qv9dz to master-0 | ||
openstack |
memcached-0 |
Scheduled |
Successfully assigned openstack/memcached-0 to master-0 | ||
openstack |
keystone-db-sync-dqtpw |
Scheduled |
Successfully assigned openstack/keystone-db-sync-dqtpw to master-0 | ||
openstack |
keystone-db-create-trh26 |
Scheduled |
Successfully assigned openstack/keystone-db-create-trh26 to master-0 | ||
openstack |
keystone-cron-29522401-79wwl |
Scheduled |
Successfully assigned openstack/keystone-cron-29522401-79wwl to master-0 | ||
metallb-system |
metallb-operator-webhook-server-7664575c4d-8f7gv |
Scheduled |
Successfully assigned metallb-system/metallb-operator-webhook-server-7664575c4d-8f7gv to master-0 | ||
openstack |
keystone-bootstrap-lc5mm |
Scheduled |
Successfully assigned openstack/keystone-bootstrap-lc5mm to master-0 | ||
openstack |
keystone-bootstrap-7jqwh |
Scheduled |
Successfully assigned openstack/keystone-bootstrap-7jqwh to master-0 | ||
openstack |
keystone-7f77fccc4f-8svgt |
Scheduled |
Successfully assigned openstack/keystone-7f77fccc4f-8svgt to master-0 | ||
openstack |
keystone-737e-account-create-update-z4wjt |
Scheduled |
Successfully assigned openstack/keystone-737e-account-create-update-z4wjt to master-0 | ||
openstack |
ironic-neutron-agent-88dd96889-vwkh6 |
Scheduled |
Successfully assigned openstack/ironic-neutron-agent-88dd96889-vwkh6 to master-0 | ||
openstack |
ironic-inspector-db-sync-x86bq |
Scheduled |
Successfully assigned openstack/ironic-inspector-db-sync-x86bq to master-0 | ||
openstack |
ironic-inspector-db-create-vmh7f |
Scheduled |
Successfully assigned openstack/ironic-inspector-db-create-vmh7f to master-0 | ||
openstack |
ironic-inspector-016b-account-create-update-v8zdc |
Scheduled |
Successfully assigned openstack/ironic-inspector-016b-account-create-update-v8zdc to master-0 | ||
openstack |
ironic-inspector-0 |
Scheduled |
Successfully assigned openstack/ironic-inspector-0 to master-0 | ||
openstack |
ironic-inspector-0 |
Scheduled |
Successfully assigned openstack/ironic-inspector-0 to master-0 | ||
openstack |
ironic-db-sync-8zl8z |
Scheduled |
Successfully assigned openstack/ironic-db-sync-8zl8z to master-0 | ||
openstack |
ironic-db-create-hgvqn |
Scheduled |
Successfully assigned openstack/ironic-db-create-hgvqn to master-0 | ||
openstack |
ironic-conductor-0 |
Scheduled |
Successfully assigned openstack/ironic-conductor-0 to master-0 | ||
openstack |
ironic-874a-account-create-update-lhwlv |
Scheduled |
Successfully assigned openstack/ironic-874a-account-create-update-lhwlv to master-0 | ||
openstack |
ironic-7b6b8d45d-l4pv4 |
Scheduled |
Successfully assigned openstack/ironic-7b6b8d45d-l4pv4 to master-0 | ||
metallb-system |
speaker-mj82t |
Scheduled |
Successfully assigned metallb-system/speaker-mj82t to master-0 | ||
openstack |
ironic-566cf67fc4-2bm2p |
Scheduled |
Successfully assigned openstack/ironic-566cf67fc4-2bm2p to master-0 | ||
openstack |
glance-db-sync-88f2d |
Scheduled |
Successfully assigned openstack/glance-db-sync-88f2d to master-0 | ||
openstack |
glance-db-create-qfrvt |
Scheduled |
Successfully assigned openstack/glance-db-create-qfrvt to master-0 | ||
openstack |
glance-7b9c2-default-internal-api-0 |
Scheduled |
Successfully assigned openstack/glance-7b9c2-default-internal-api-0 to master-0 | ||
openstack |
glance-7b9c2-default-internal-api-0 |
FailedScheduling |
running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods "glance-7b9c2-default-internal-api-0": StorageError: invalid object, Code: 4, Key: /kubernetes.io/pods/openstack/glance-7b9c2-default-internal-api-0, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 9292bb09-7555-4829-b89d-53e973698121, UID in object meta: ba50f35d-07b5-4db9-bc46-3ffeb03f3902 | ||
openstack |
glance-7b9c2-default-internal-api-0 |
Scheduled |
Successfully assigned openstack/glance-7b9c2-default-internal-api-0 to master-0 | ||
openstack |
glance-7b9c2-default-external-api-0 |
Scheduled |
Successfully assigned openstack/glance-7b9c2-default-external-api-0 to master-0 | ||
openstack |
glance-7b9c2-default-external-api-0 |
Scheduled |
Successfully assigned openstack/glance-7b9c2-default-external-api-0 to master-0 | ||
openshift-monitoring |
prometheus-operator-admission-webhook-695b766898-nm8rs |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | ||
openshift-monitoring |
prometheus-operator-admission-webhook-695b766898-nm8rs |
Scheduled |
Successfully assigned openshift-monitoring/prometheus-operator-admission-webhook-695b766898-nm8rs to master-0 | ||
openshift-monitoring |
prometheus-operator-admission-webhook-695b766898-nm8rs |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | ||
openstack |
cinder-04ef3-backup-0 |
Scheduled |
Successfully assigned openstack/cinder-04ef3-backup-0 to master-0 | ||
openshift-marketplace |
925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5bq8w9 |
Scheduled |
Successfully assigned openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5bq8w9 to master-0 | ||
openshift-cluster-machine-approver |
machine-approver-8569dd85ff-f9g8s |
Scheduled |
Successfully assigned openshift-cluster-machine-approver/machine-approver-8569dd85ff-f9g8s to master-0 | ||
openstack-operators |
glance-operator-controller-manager-77987464f4-sqmnn |
Scheduled |
Successfully assigned openstack-operators/glance-operator-controller-manager-77987464f4-sqmnn to master-0 | ||
openshift-monitoring |
prometheus-operator-7485d645b8-nzz2j |
Scheduled |
Successfully assigned openshift-monitoring/prometheus-operator-7485d645b8-nzz2j to master-0 | ||
openstack |
cinder-04ef3-api-0 |
Scheduled |
Successfully assigned openstack/cinder-04ef3-api-0 to master-0 | ||
openstack |
cinder-04ef3-api-0 |
Scheduled |
Successfully assigned openstack/cinder-04ef3-api-0 to master-0 | ||
openshift-monitoring |
alertmanager-main-0 |
Scheduled |
Successfully assigned openshift-monitoring/alertmanager-main-0 to master-0 | ||
openshift-storage |
vg-manager-5rvk7 |
Scheduled |
Successfully assigned openshift-storage/vg-manager-5rvk7 to master-0 | ||
openshift-storage |
lvms-operator-59b4cb8ccf-q5dk5 |
Scheduled |
Successfully assigned openshift-storage/lvms-operator-59b4cb8ccf-q5dk5 to master-0 | ||
openshift-operators |
perses-operator-5bf474d74f-tw9pm |
Scheduled |
Successfully assigned openshift-operators/perses-operator-5bf474d74f-tw9pm to master-0 | ||
openstack-operators |
designate-operator-controller-manager-6d8bf5c495-nn59f |
Scheduled |
Successfully assigned openstack-operators/designate-operator-controller-manager-6d8bf5c495-nn59f to master-0 | ||
openshift-marketplace |
f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecan55kc |
Scheduled |
Successfully assigned openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecan55kc to master-0 | ||
openshift-monitoring |
prometheus-k8s-0 |
Scheduled |
Successfully assigned openshift-monitoring/prometheus-k8s-0 to master-0 | ||
openstack-operators |
neutron-operator-controller-manager-64ddbf8bb-5mtgr |
Scheduled |
Successfully assigned openstack-operators/neutron-operator-controller-manager-64ddbf8bb-5mtgr to master-0 | ||
openshift-monitoring |
openshift-state-metrics-546cc7d765-b4xl8 |
Scheduled |
Successfully assigned openshift-monitoring/openshift-state-metrics-546cc7d765-b4xl8 to master-0 | ||
openshift-monitoring |
prometheus-k8s-0 |
Scheduled |
Successfully assigned openshift-monitoring/prometheus-k8s-0 to master-0 | ||
openshift-operator-lifecycle-manager |
packageserver-67d4dbd88b-szr25 |
Scheduled |
Successfully assigned openshift-operator-lifecycle-manager/packageserver-67d4dbd88b-szr25 to master-0 | ||
openstack-operators |
infra-operator-controller-manager-5f879c76b6-2x4ww |
Scheduled |
Successfully assigned openstack-operators/infra-operator-controller-manager-5f879c76b6-2x4ww to master-0 | ||
openshift-machine-api |
cluster-baremetal-operator-7bc947fc7d-8qkdw |
Scheduled |
Successfully assigned openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-8qkdw to master-0 | ||
openshift-operators |
observability-operator-59bdc8b94-d8nkj |
Scheduled |
Successfully assigned openshift-operators/observability-operator-59bdc8b94-d8nkj to master-0 | ||
openstack |
dnsmasq-dns-75b66f9649-znfnp |
Scheduled |
Successfully assigned openstack/dnsmasq-dns-75b66f9649-znfnp to master-0 | ||
openshift-operators |
obo-prometheus-operator-admission-webhook-79ffb45c8c-gqgb7 |
Scheduled |
Successfully assigned openshift-operators/obo-prometheus-operator-admission-webhook-79ffb45c8c-gqgb7 to master-0 | ||
openshift-operators |
obo-prometheus-operator-admission-webhook-79ffb45c8c-9jw4s |
Scheduled |
Successfully assigned openshift-operators/obo-prometheus-operator-admission-webhook-79ffb45c8c-9jw4s to master-0 | ||
openshift-operators |
obo-prometheus-operator-68bc856cb9-5tqc8 |
Scheduled |
Successfully assigned openshift-operators/obo-prometheus-operator-68bc856cb9-5tqc8 to master-0 | ||
openshift-marketplace |
a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2132lnds |
Scheduled |
Successfully assigned openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2132lnds to master-0 | ||
openshift-monitoring |
prometheus-k8s-0 |
Scheduled |
Successfully assigned openshift-monitoring/prometheus-k8s-0 to master-0 | ||
openshift-monitoring |
prometheus-k8s-0 |
Scheduled |
Successfully assigned openshift-monitoring/prometheus-k8s-0 to master-0 | ||
openshift-monitoring |
prometheus-operator-7485d645b8-nzz2j |
Scheduled |
Successfully assigned openshift-monitoring/prometheus-operator-7485d645b8-nzz2j to master-0 | ||
openstack-operators |
openstack-operator-controller-init-7f8db498b4-66blt |
Scheduled |
Successfully assigned openstack-operators/openstack-operator-controller-init-7f8db498b4-66blt to master-0 | ||
openstack-operators |
openstack-operator-controller-manager-74d597bfd6-98qgl |
Scheduled |
Successfully assigned openstack-operators/openstack-operator-controller-manager-74d597bfd6-98qgl to master-0 | ||
openstack-operators |
openstack-operator-index-chx5x |
Scheduled |
Successfully assigned openstack-operators/openstack-operator-index-chx5x to master-0 | ||
openstack-operators |
openstack-operator-index-gzfb5 |
Scheduled |
Successfully assigned openstack-operators/openstack-operator-index-gzfb5 to master-0 | ||
openshift-monitoring |
alertmanager-main-0 |
Scheduled |
Successfully assigned openshift-monitoring/alertmanager-main-0 to master-0 | ||
openstack |
dnsmasq-dns-78d5d45447-bfqg5 |
Scheduled |
Successfully assigned openstack/dnsmasq-dns-78d5d45447-bfqg5 to master-0 | ||
openstack |
dnsmasq-dns-7d78499c-p9rp4 |
Scheduled |
Successfully assigned openstack/dnsmasq-dns-7d78499c-p9rp4 to master-0 | ||
openshift-nmstate |
nmstate-webhook-866bcb46dc-4q7kf |
Scheduled |
Successfully assigned openshift-nmstate/nmstate-webhook-866bcb46dc-4q7kf to master-0 | ||
openshift-marketplace |
98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mmtbg |
Scheduled |
Successfully assigned openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mmtbg to master-0 | ||
openstack-operators |
ovn-operator-controller-manager-d44cf6b75-gwh4x |
Scheduled |
Successfully assigned openstack-operators/ovn-operator-controller-manager-d44cf6b75-gwh4x to master-0 | ||
openshift-multus |
cni-sysctl-allowlist-ds-c5mq6 |
Scheduled |
Successfully assigned openshift-multus/cni-sysctl-allowlist-ds-c5mq6 to master-0 | ||
openshift-nmstate |
nmstate-operator-694c9596b7-vbkqw |
Scheduled |
Successfully assigned openshift-nmstate/nmstate-operator-694c9596b7-vbkqw to master-0 | ||
openstack |
dnsmasq-dns-8f95c8447-f78pp |
Scheduled |
Successfully assigned openstack/dnsmasq-dns-8f95c8447-f78pp to master-0 | ||
openshift-nmstate |
nmstate-console-plugin-5c78fc5d65-c9ckb |
Scheduled |
Successfully assigned openshift-nmstate/nmstate-console-plugin-5c78fc5d65-c9ckb to master-0 | ||
openstack |
dnsmasq-dns-c54fb858c-f69kf |
Scheduled |
Successfully assigned openstack/dnsmasq-dns-c54fb858c-f69kf to master-0 | ||
openshift-cloud-controller-manager-operator |
cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c |
Scheduled |
Successfully assigned openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c to master-0 | ||
openstack |
glance-4c91-account-create-update-b2plp |
Scheduled |
Successfully assigned openstack/glance-4c91-account-create-update-b2plp to master-0 | ||
openshift-monitoring |
openshift-state-metrics-546cc7d765-b4xl8 |
Scheduled |
Successfully assigned openshift-monitoring/openshift-state-metrics-546cc7d765-b4xl8 to master-0 | ||
openstack |
dnsmasq-dns-d687b68b9-7r7fm |
Scheduled |
Successfully assigned openstack/dnsmasq-dns-d687b68b9-7r7fm to master-0 | ||
openstack |
dnsmasq-dns-dd74dd7c9-jfb4s |
Scheduled |
Successfully assigned openstack/dnsmasq-dns-dd74dd7c9-jfb4s to master-0 | ||
openshift-multus |
multus-admission-controller-bb4ff5654-mmnxt |
Scheduled |
Successfully assigned openshift-multus/multus-admission-controller-bb4ff5654-mmnxt to master-0 | ||
| | kube-system | | | | Required control plane pods have been created |
| | kube-system | kube-controller-manager | kube-controller-manager | LeaderElection | master-0_47dbb7f7-ecfa-444e-ad16-34deb1786c2f became leader |
| | kube-system | default-scheduler | kube-scheduler | LeaderElection | master-0_0e3bf329-8ad5-4027-b393-f502dd9a7a25 became leader |
| | kube-system | cluster-policy-controller | cluster-policy-controller-lock | LeaderElection | master-0_8c108115-56e8-4452-afde-4cb167d3d3a2 became leader |
| | kube-system | cluster-policy-controller | bootstrap-kube-controller-manager-master-0 | ControlPlaneTopology | unable to get control plane topology, using HA cluster values for leader election: the server could not find the requested resource (get infrastructures.config.openshift.io cluster) |
| | default | apiserver | openshift-kube-apiserver | KubeAPIReadyz | readyz=true |
| | kube-system | kube-controller-manager | kube-controller-manager | LeaderElection | master-0_d7acae37-8c17-4e93-9806-eff58baf2c33 became leader |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for default namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for kube-public namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-kube-apiserver namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for kube-node-lease namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-kube-scheduler namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-kube-controller-manager namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-kube-apiserver-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-kube-controller-manager-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for kube-system namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cluster-version namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-infra namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-etcd namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for assisted-installer namespace |
| | openshift-cluster-version | openshift-cluster-version | version | LeaderElection | master-0_94221fc7-bb3c-4b8a-81ef-c42efcf28f01 became leader |
| (x2) | assisted-installer | job-controller | assisted-installer-controller | FailedCreate | Error creating: pods "assisted-installer-controller-" is forbidden: error looking up service account assisted-installer/assisted-installer-controller: serviceaccount "assisted-installer-controller" not found |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cloud-credential-operator namespace |
| | assisted-installer | job-controller | assisted-installer-controller | SuccessfulCreate | Created pod: assisted-installer-controller-5fwlz |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-ingress-operator namespace |
| | openshift-cluster-version | openshift-cluster-version | version | LeaderElection | master-0_a07c59aa-cfe1-4ecc-a260-06fbe1e235e7 became leader |
| | openshift-cluster-version | openshift-cluster-version | version | LeaderElection | master-0_a07c59aa-cfe1-4ecc-a260-06fbe1e235e7 stopped leading |
| | openshift-cluster-version | deployment-controller | cluster-version-operator | ScalingReplicaSet | Scaled up replica set cluster-version-operator-76959b6567 to 1 |
| | openshift-cluster-version | openshift-cluster-version | version | LeaderElection | master-0_17131242-9c80-4d9c-8231-cd7dbb19036c became leader |
| | openshift-cluster-version | openshift-cluster-version | version | RetrievePayload | Retrieving and verifying payload version="4.18.32" image="quay.io/openshift-release-dev/ocp-release@sha256:6177c447b98c36a42fd45fa2ba413da73d14d0a7ad3aecfa977554f5ae9583cc" |
| | openshift-cluster-version | openshift-cluster-version | version | LoadPayload | Loading payload version="4.18.32" image="quay.io/openshift-release-dev/ocp-release@sha256:6177c447b98c36a42fd45fa2ba413da73d14d0a7ad3aecfa977554f5ae9583cc" |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-config-operator namespace |
| | openshift-cluster-version | openshift-cluster-version | version | PayloadLoaded | Payload loaded version="4.18.32" image="quay.io/openshift-release-dev/ocp-release@sha256:6177c447b98c36a42fd45fa2ba413da73d14d0a7ad3aecfa977554f5ae9583cc" architecture="amd64" |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cloud-network-config-controller namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cluster-storage-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-network-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-marketplace namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-kube-scheduler-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cluster-node-tuning-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cloud-controller-manager-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-etcd-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-controller-manager-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cluster-csi-drivers namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-authentication-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cluster-machine-approver namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-insights namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-apiserver-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-dns-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cluster-samples-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cloud-controller-manager namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-service-ca-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-image-registry namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-machine-config-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cluster-olm-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-kube-storage-version-migrator-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-openstack-infra namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-kni-infra namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-operator-lifecycle-manager namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-ovirt-infra namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-operators namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-vsphere-infra namespace |
| | openshift-cluster-olm-operator | deployment-controller | cluster-olm-operator | ScalingReplicaSet | Scaled up replica set cluster-olm-operator-55b69c6c48 to 1 |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-nutanix-infra namespace |
| | openshift-network-operator | deployment-controller | network-operator | ScalingReplicaSet | Scaled up replica set network-operator-6fcf4c966 to 1 |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cloud-platform-infra namespace |
| | openshift-kube-controller-manager-operator | deployment-controller | kube-controller-manager-operator | ScalingReplicaSet | Scaled up replica set kube-controller-manager-operator-78ff47c7c5 to 1 |
| | openshift-dns-operator | deployment-controller | dns-operator | ScalingReplicaSet | Scaled up replica set dns-operator-86b8869b79 to 1 |
| | openshift-kube-storage-version-migrator-operator | deployment-controller | kube-storage-version-migrator-operator | ScalingReplicaSet | Scaled up replica set kube-storage-version-migrator-operator-cd5474998 to 1 |
| | openshift-service-ca-operator | deployment-controller | service-ca-operator | ScalingReplicaSet | Scaled up replica set service-ca-operator-5dc4688546 to 1 |
| | openshift-kube-scheduler-operator | deployment-controller | openshift-kube-scheduler-operator | ScalingReplicaSet | Scaled up replica set openshift-kube-scheduler-operator-7485d55966 to 1 |
| | openshift-apiserver-operator | deployment-controller | openshift-apiserver-operator | ScalingReplicaSet | Scaled up replica set openshift-apiserver-operator-6d4655d9cf to 1 |
| | openshift-controller-manager-operator | deployment-controller | openshift-controller-manager-operator | ScalingReplicaSet | Scaled up replica set openshift-controller-manager-operator-5f5f84757d to 1 |
| | openshift-marketplace | deployment-controller | marketplace-operator | ScalingReplicaSet | Scaled up replica set marketplace-operator-6cc5b65c6b to 1 |
| | openshift-etcd-operator | deployment-controller | etcd-operator | ScalingReplicaSet | Scaled up replica set etcd-operator-67bf55ccdd to 1 |
| (x2) | openshift-operator-lifecycle-manager | controllermanager | packageserver-pdb | NoPods | No matching pods found |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-user-workload-monitoring namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-monitoring namespace |
| | openshift-authentication-operator | deployment-controller | authentication-operator | ScalingReplicaSet | Scaled up replica set authentication-operator-755d954778 to 1 |
| (x9) | assisted-installer |
default-scheduler |
assisted-installer-controller-5fwlz |
FailedScheduling |
no nodes available to schedule pods |
| (x12) | openshift-cluster-olm-operator |
replicaset-controller |
cluster-olm-operator-55b69c6c48 |
FailedCreate |
Error creating: pods "cluster-olm-operator-55b69c6c48-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-config namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-machine-api namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-config-managed namespace | |
| (x12) | openshift-kube-controller-manager-operator |
replicaset-controller |
kube-controller-manager-operator-78ff47c7c5 |
FailedCreate |
Error creating: pods "kube-controller-manager-operator-78ff47c7c5-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x12) | openshift-network-operator |
replicaset-controller |
network-operator-6fcf4c966 |
FailedCreate |
Error creating: pods "network-operator-6fcf4c966-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x12) | openshift-kube-storage-version-migrator-operator |
replicaset-controller |
kube-storage-version-migrator-operator-cd5474998 |
FailedCreate |
Error creating: pods "kube-storage-version-migrator-operator-cd5474998-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x12) | openshift-dns-operator |
replicaset-controller |
dns-operator-86b8869b79 |
FailedCreate |
Error creating: pods "dns-operator-86b8869b79-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x12) | openshift-kube-scheduler-operator |
replicaset-controller |
openshift-kube-scheduler-operator-7485d55966 |
FailedCreate |
Error creating: pods "openshift-kube-scheduler-operator-7485d55966-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x12) | openshift-service-ca-operator |
replicaset-controller |
service-ca-operator-5dc4688546 |
FailedCreate |
Error creating: pods "service-ca-operator-5dc4688546-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x12) | openshift-apiserver-operator |
replicaset-controller |
openshift-apiserver-operator-6d4655d9cf |
FailedCreate |
Error creating: pods "openshift-apiserver-operator-6d4655d9cf-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x12) | openshift-etcd-operator |
replicaset-controller |
etcd-operator-67bf55ccdd |
FailedCreate |
Error creating: pods "etcd-operator-67bf55ccdd-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x12) | openshift-marketplace |
replicaset-controller |
marketplace-operator-6cc5b65c6b |
FailedCreate |
Error creating: pods "marketplace-operator-6cc5b65c6b-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x12) | openshift-controller-manager-operator |
replicaset-controller |
openshift-controller-manager-operator-5f5f84757d |
FailedCreate |
Error creating: pods "openshift-controller-manager-operator-5f5f84757d-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x12) | openshift-authentication-operator |
replicaset-controller |
authentication-operator-755d954778 |
FailedCreate |
Error creating: pods "authentication-operator-755d954778-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
openshift-cluster-storage-operator |
deployment-controller |
csi-snapshot-controller-operator |
ScalingReplicaSet |
Scaled up replica set csi-snapshot-controller-operator-7b87b97578 to 1 | |
openshift-monitoring |
deployment-controller |
cluster-monitoring-operator |
ScalingReplicaSet |
Scaled up replica set cluster-monitoring-operator-756d64c8c4 to 1 | |
openshift-monitoring |
deployment-controller |
cluster-monitoring-operator |
ScalingReplicaSet |
Scaled up replica set cluster-monitoring-operator-756d64c8c4 to 1 | |
openshift-cluster-node-tuning-operator |
deployment-controller |
cluster-node-tuning-operator |
ScalingReplicaSet |
Scaled up replica set cluster-node-tuning-operator-ff6c9b66 to 1 | |
openshift-cluster-node-tuning-operator |
deployment-controller |
cluster-node-tuning-operator |
ScalingReplicaSet |
Scaled up replica set cluster-node-tuning-operator-ff6c9b66 to 1 | |
openshift-operator-lifecycle-manager |
deployment-controller |
package-server-manager |
ScalingReplicaSet |
Scaled up replica set package-server-manager-5c696dbdcd to 1 | |
| (x14) | openshift-cluster-version |
replicaset-controller |
cluster-version-operator-76959b6567 |
FailedCreate |
Error creating: pods "cluster-version-operator-76959b6567-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
openshift-ingress-operator |
deployment-controller |
ingress-operator |
ScalingReplicaSet |
Scaled up replica set ingress-operator-c588d8cb4 to 1 | |
openshift-kube-apiserver-operator |
deployment-controller |
kube-apiserver-operator |
ScalingReplicaSet |
Scaled up replica set kube-apiserver-operator-54984b6678 to 1 | |
| (x10) | openshift-cluster-storage-operator |
replicaset-controller |
csi-snapshot-controller-operator-7b87b97578 |
FailedCreate |
Error creating: pods "csi-snapshot-controller-operator-7b87b97578-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x10) | openshift-monitoring |
replicaset-controller |
cluster-monitoring-operator-756d64c8c4 |
FailedCreate |
Error creating: pods "cluster-monitoring-operator-756d64c8c4-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x10) | openshift-monitoring |
replicaset-controller |
cluster-monitoring-operator-756d64c8c4 |
FailedCreate |
Error creating: pods "cluster-monitoring-operator-756d64c8c4-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x10) | openshift-cluster-node-tuning-operator |
replicaset-controller |
cluster-node-tuning-operator-ff6c9b66 |
FailedCreate |
Error creating: pods "cluster-node-tuning-operator-ff6c9b66-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x10) | openshift-cluster-node-tuning-operator |
replicaset-controller |
cluster-node-tuning-operator-ff6c9b66 |
FailedCreate |
Error creating: pods "cluster-node-tuning-operator-ff6c9b66-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x9) | openshift-ingress-operator |
replicaset-controller |
ingress-operator-c588d8cb4 |
FailedCreate |
Error creating: pods "ingress-operator-c588d8cb4-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
openshift-image-registry |
deployment-controller |
cluster-image-registry-operator |
ScalingReplicaSet |
Scaled up replica set cluster-image-registry-operator-96c8c64b8 to 1 | |
| (x9) | openshift-image-registry |
replicaset-controller |
cluster-image-registry-operator-96c8c64b8 |
FailedCreate |
Error creating: pods "cluster-image-registry-operator-96c8c64b8-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
openshift-operator-lifecycle-manager |
deployment-controller |
olm-operator |
ScalingReplicaSet |
Scaled up replica set olm-operator-6b56bd877c to 1 | |
openshift-operator-lifecycle-manager |
deployment-controller |
catalog-operator |
ScalingReplicaSet |
Scaled up replica set catalog-operator-588944557d to 1 | |
| (x9) | openshift-kube-apiserver-operator |
replicaset-controller |
kube-apiserver-operator-54984b6678 |
FailedCreate |
Error creating: pods "kube-apiserver-operator-54984b6678-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
openshift-config-operator |
deployment-controller |
openshift-config-operator |
ScalingReplicaSet |
Scaled up replica set openshift-config-operator-7c6bdb986f to 1 | |
| (x8) | openshift-config-operator |
replicaset-controller |
openshift-config-operator-7c6bdb986f |
FailedCreate |
Error creating: pods "openshift-config-operator-7c6bdb986f-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x8) | openshift-operator-lifecycle-manager |
replicaset-controller |
catalog-operator-588944557d |
FailedCreate |
Error creating: pods "catalog-operator-588944557d-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x8) | openshift-operator-lifecycle-manager |
replicaset-controller |
olm-operator-6b56bd877c |
FailedCreate |
Error creating: pods "olm-operator-6b56bd877c-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x10) | openshift-operator-lifecycle-manager |
replicaset-controller |
package-server-manager-5c696dbdcd |
FailedCreate |
Error creating: pods "package-server-manager-5c696dbdcd-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
default |
apiserver |
openshift-kube-apiserver |
InFlightRequestsDrained |
All non long-running request(s) in-flight have drained | |
default |
apiserver |
openshift-kube-apiserver |
ShutdownInitiated |
Received signal to terminate, becoming unready, but keeping serving | |
default |
apiserver |
openshift-kube-apiserver |
AfterShutdownDelayDuration |
The minimal shutdown duration of 0s finished | |
default |
apiserver |
openshift-kube-apiserver |
TerminationPreShutdownHooksFinished |
All pre-shutdown hooks have been finished | |
default |
apiserver |
openshift-kube-apiserver |
HTTPServerStoppedListening |
HTTP Server has stopped listening | |
kube-system |
Required control plane pods have been created | ||||
kube-system |
cluster-policy-controller |
cluster-policy-controller-lock |
LeaderElection |
master-0_8c108115-56e8-4452-afde-4cb167d3d3a2 stopped leading | |
kube-system |
default-scheduler |
kube-scheduler |
LeaderElection |
master-0_bc21311c-c0cd-44d8-8541-8447e36ee375 became leader | |
default |
apiserver |
openshift-kube-apiserver |
KubeAPIReadyz |
readyz=true | |
kube-system |
cluster-policy-controller |
cluster-policy-controller-lock |
LeaderElection |
master-0_1e94f9ef-fde6-4fc9-a725-99a315ea27f8 became leader | |
| (x5) | assisted-installer |
default-scheduler |
assisted-installer-controller-5fwlz |
FailedScheduling |
no nodes available to schedule pods |
| (x6) | openshift-kube-controller-manager-operator |
replicaset-controller |
kube-controller-manager-operator-78ff47c7c5 |
FailedCreate |
Error creating: pods "kube-controller-manager-operator-78ff47c7c5-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x6) | openshift-ingress-operator |
replicaset-controller |
ingress-operator-c588d8cb4 |
FailedCreate |
Error creating: pods "ingress-operator-c588d8cb4-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x6) | openshift-cluster-storage-operator |
replicaset-controller |
csi-snapshot-controller-operator-7b87b97578 |
FailedCreate |
Error creating: pods "csi-snapshot-controller-operator-7b87b97578-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x6) | openshift-authentication-operator |
replicaset-controller |
authentication-operator-755d954778 |
FailedCreate |
Error creating: pods "authentication-operator-755d954778-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x6) | openshift-controller-manager-operator |
replicaset-controller |
openshift-controller-manager-operator-5f5f84757d |
FailedCreate |
Error creating: pods "openshift-controller-manager-operator-5f5f84757d-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
openshift-operator-lifecycle-manager |
cronjob-controller |
collect-profiles |
SuccessfulCreate |
Created job collect-profiles-29522340 | |
| (x6) | openshift-cluster-olm-operator |
replicaset-controller |
cluster-olm-operator-55b69c6c48 |
FailedCreate |
Error creating: pods "cluster-olm-operator-55b69c6c48-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x6) | openshift-marketplace |
replicaset-controller |
marketplace-operator-6cc5b65c6b |
FailedCreate |
Error creating: pods "marketplace-operator-6cc5b65c6b-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x5) | openshift-service-ca-operator |
replicaset-controller |
service-ca-operator-5dc4688546 |
FailedCreate |
Error creating: pods "service-ca-operator-5dc4688546-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x6) | openshift-cluster-version |
replicaset-controller |
cluster-version-operator-76959b6567 |
FailedCreate |
Error creating: pods "cluster-version-operator-76959b6567-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
kube-system |
kube-controller-manager |
kube-controller-manager |
LeaderElection |
master-0_9a6e8651-8e61-43d5-b940-d330c382b6a7 became leader | |
| (x6) | openshift-kube-apiserver-operator |
replicaset-controller |
kube-apiserver-operator-54984b6678 |
FailedCreate |
Error creating: pods "kube-apiserver-operator-54984b6678-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
openshift-operator-lifecycle-manager |
controllermanager |
packageserver-pdb |
NoPods |
No matching pods found | |
| (x6) | openshift-operator-lifecycle-manager |
replicaset-controller |
catalog-operator-588944557d |
FailedCreate |
Error creating: pods "catalog-operator-588944557d-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x6) | openshift-config-operator |
replicaset-controller |
openshift-config-operator-7c6bdb986f |
FailedCreate |
Error creating: pods "openshift-config-operator-7c6bdb986f-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x6) | openshift-cluster-node-tuning-operator |
replicaset-controller |
cluster-node-tuning-operator-ff6c9b66 |
FailedCreate |
Error creating: pods "cluster-node-tuning-operator-ff6c9b66-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x6) | openshift-image-registry |
replicaset-controller |
cluster-image-registry-operator-96c8c64b8 |
FailedCreate |
Error creating: pods "cluster-image-registry-operator-96c8c64b8-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x6) | openshift-dns-operator |
replicaset-controller |
dns-operator-86b8869b79 |
FailedCreate |
Error creating: pods "dns-operator-86b8869b79-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x6) | openshift-cluster-node-tuning-operator |
replicaset-controller |
cluster-node-tuning-operator-ff6c9b66 |
FailedCreate |
Error creating: pods "cluster-node-tuning-operator-ff6c9b66-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x6) | openshift-apiserver-operator |
replicaset-controller |
openshift-apiserver-operator-6d4655d9cf |
FailedCreate |
Error creating: pods "openshift-apiserver-operator-6d4655d9cf-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x2) | openshift-operator-lifecycle-manager |
job-controller |
collect-profiles-29522340 |
FailedCreate |
Error creating: pods "collect-profiles-29522340-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x6) | openshift-operator-lifecycle-manager |
replicaset-controller |
package-server-manager-5c696dbdcd |
FailedCreate |
Error creating: pods "package-server-manager-5c696dbdcd-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x6) | openshift-kube-storage-version-migrator-operator |
replicaset-controller |
kube-storage-version-migrator-operator-cd5474998 |
FailedCreate |
Error creating: pods "kube-storage-version-migrator-operator-cd5474998-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
openshift-apiserver-operator |
replicaset-controller |
openshift-apiserver-operator-6d4655d9cf |
SuccessfulCreate |
Created pod: openshift-apiserver-operator-6d4655d9cf-5f5g9 | |
openshift-apiserver-operator |
default-scheduler |
openshift-apiserver-operator-6d4655d9cf-5f5g9 |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | |
| (x6) | openshift-operator-lifecycle-manager |
replicaset-controller |
olm-operator-6b56bd877c |
FailedCreate |
Error creating: pods "olm-operator-6b56bd877c-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x4) | openshift-machine-config-operator |
kubelet |
kube-rbac-proxy-crio-master-0 |
BackOff |
Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(b3322fd3717f4aec0d8f54ec7862c07e) |
| (x6) | openshift-kube-scheduler-operator |
replicaset-controller |
openshift-kube-scheduler-operator-7485d55966 |
FailedCreate |
Error creating: pods "openshift-kube-scheduler-operator-7485d55966-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x6) | openshift-monitoring |
replicaset-controller |
cluster-monitoring-operator-756d64c8c4 |
FailedCreate |
Error creating: pods "cluster-monitoring-operator-756d64c8c4-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
openshift-service-ca-operator |
default-scheduler |
service-ca-operator-5dc4688546-sg75p |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | |
openshift-service-ca-operator |
replicaset-controller |
service-ca-operator-5dc4688546 |
SuccessfulCreate |
Created pod: service-ca-operator-5dc4688546-sg75p | |
| (x5) | openshift-network-operator |
replicaset-controller |
network-operator-6fcf4c966 |
FailedCreate |
Error creating: pods "network-operator-6fcf4c966-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x6) | openshift-monitoring |
replicaset-controller |
cluster-monitoring-operator-756d64c8c4 |
FailedCreate |
Error creating: pods "cluster-monitoring-operator-756d64c8c4-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x6) | openshift-etcd-operator |
replicaset-controller |
etcd-operator-67bf55ccdd |
FailedCreate |
Error creating: pods "etcd-operator-67bf55ccdd-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
openshift-cluster-node-tuning-operator |
replicaset-controller |
cluster-node-tuning-operator-ff6c9b66 |
SuccessfulCreate |
Created pod: cluster-node-tuning-operator-ff6c9b66-k8xp8 | |
openshift-cluster-version |
replicaset-controller |
cluster-version-operator-76959b6567 |
SuccessfulCreate |
Created pod: cluster-version-operator-76959b6567-v49tq | |
openshift-cluster-storage-operator |
replicaset-controller |
csi-snapshot-controller-operator-7b87b97578 |
SuccessfulCreate |
Created pod: csi-snapshot-controller-operator-7b87b97578-9fpgj | |
openshift-dns-operator |
default-scheduler |
dns-operator-86b8869b79-lmqrr |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | |
openshift-cluster-olm-operator |
default-scheduler |
cluster-olm-operator-55b69c6c48-mzk89 |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | |
openshift-authentication-operator |
replicaset-controller |
authentication-operator-755d954778 |
SuccessfulCreate |
Created pod: authentication-operator-755d954778-jrdqm | |
openshift-dns-operator |
replicaset-controller |
dns-operator-86b8869b79 |
SuccessfulCreate |
Created pod: dns-operator-86b8869b79-lmqrr | |
openshift-cluster-storage-operator |
default-scheduler |
csi-snapshot-controller-operator-7b87b97578-9fpgj |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | |
openshift-ingress-operator |
default-scheduler |
ingress-operator-c588d8cb4-nclxg |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | |
openshift-cluster-node-tuning-operator |
replicaset-controller |
cluster-node-tuning-operator-ff6c9b66 |
SuccessfulCreate |
Created pod: cluster-node-tuning-operator-ff6c9b66-k8xp8 | |
openshift-kube-apiserver-operator |
replicaset-controller |
kube-apiserver-operator-54984b6678 |
SuccessfulCreate |
Created pod: kube-apiserver-operator-54984b6678-p5mdv | |
openshift-controller-manager-operator |
replicaset-controller |
openshift-controller-manager-operator-5f5f84757d |
SuccessfulCreate |
Created pod: openshift-controller-manager-operator-5f5f84757d-dsfkk | |
openshift-config-operator |
replicaset-controller |
openshift-config-operator-7c6bdb986f |
SuccessfulCreate |
Created pod: openshift-config-operator-7c6bdb986f-fcnqs | |
openshift-cluster-version |
default-scheduler |
cluster-version-operator-76959b6567-v49tq |
Scheduled |
Successfully assigned openshift-cluster-version/cluster-version-operator-76959b6567-v49tq to master-0 | |
openshift-cluster-node-tuning-operator |
default-scheduler |
cluster-node-tuning-operator-ff6c9b66-k8xp8 |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | |
openshift-kube-apiserver-operator |
default-scheduler |
kube-apiserver-operator-54984b6678-p5mdv |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | |
openshift-controller-manager-operator |
default-scheduler |
openshift-controller-manager-operator-5f5f84757d-dsfkk |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | |
openshift-ingress-operator |
replicaset-controller |
ingress-operator-c588d8cb4 |
SuccessfulCreate |
Created pod: ingress-operator-c588d8cb4-nclxg | |
openshift-authentication-operator |
default-scheduler |
authentication-operator-755d954778-jrdqm |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | |
openshift-cluster-olm-operator |
replicaset-controller |
cluster-olm-operator-55b69c6c48 |
SuccessfulCreate |
Created pod: cluster-olm-operator-55b69c6c48-mzk89 | |
openshift-cluster-node-tuning-operator |
default-scheduler |
cluster-node-tuning-operator-ff6c9b66-k8xp8 |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | |
openshift-config-operator |
default-scheduler |
openshift-config-operator-7c6bdb986f-fcnqs |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | |
openshift-operator-lifecycle-manager |
job-controller |
collect-profiles-29522340 |
SuccessfulCreate |
Created pod: collect-profiles-29522340-8cp6h | |
openshift-marketplace |
replicaset-controller |
marketplace-operator-6cc5b65c6b |
SuccessfulCreate |
Created pod: marketplace-operator-6cc5b65c6b-wqxmh | |
openshift-kube-scheduler-operator |
replicaset-controller |
openshift-kube-scheduler-operator-7485d55966 |
SuccessfulCreate |
Created pod: openshift-kube-scheduler-operator-7485d55966-wcpf8 | |
openshift-kube-controller-manager-operator |
replicaset-controller |
kube-controller-manager-operator-78ff47c7c5 |
SuccessfulCreate |
Created pod: kube-controller-manager-operator-78ff47c7c5-xvzq9 | |
openshift-kube-controller-manager-operator |
default-scheduler |
kube-controller-manager-operator-78ff47c7c5-xvzq9 |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | |
openshift-network-operator |
replicaset-controller |
network-operator-6fcf4c966 |
SuccessfulCreate |
Created pod: network-operator-6fcf4c966-l24cg | |
openshift-etcd-operator |
default-scheduler |
etcd-operator-67bf55ccdd-pjm6n |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | |
openshift-etcd-operator |
replicaset-controller |
etcd-operator-67bf55ccdd |
SuccessfulCreate |
Created pod: etcd-operator-67bf55ccdd-pjm6n | |
openshift-marketplace |
default-scheduler |
marketplace-operator-6cc5b65c6b-wqxmh |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | |
openshift-operator-lifecycle-manager |
replicaset-controller |
catalog-operator-588944557d |
SuccessfulCreate |
Created pod: catalog-operator-588944557d-kjh2v | |
openshift-image-registry |
replicaset-controller |
cluster-image-registry-operator-96c8c64b8 |
SuccessfulCreate |
Created pod: cluster-image-registry-operator-96c8c64b8-dtwmd | |
openshift-operator-lifecycle-manager |
default-scheduler |
catalog-operator-588944557d-kjh2v |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | |
openshift-image-registry |
default-scheduler |
cluster-image-registry-operator-96c8c64b8-dtwmd |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | |
openshift-monitoring |
replicaset-controller |
cluster-monitoring-operator-756d64c8c4 |
SuccessfulCreate |
Created pod: cluster-monitoring-operator-756d64c8c4-ddgs9 | |
openshift-network-operator |
default-scheduler |
network-operator-6fcf4c966-l24cg |
Scheduled |
Successfully assigned openshift-network-operator/network-operator-6fcf4c966-l24cg to master-0 | |
| | openshift-kube-storage-version-migrator-operator | default-scheduler | kube-storage-version-migrator-operator-cd5474998-tckph | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-operator-lifecycle-manager | replicaset-controller | olm-operator-6b56bd877c | SuccessfulCreate | Created pod: olm-operator-6b56bd877c-tk8xm |
| | openshift-kube-scheduler-operator | default-scheduler | openshift-kube-scheduler-operator-7485d55966-wcpf8 | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-monitoring | replicaset-controller | cluster-monitoring-operator-756d64c8c4 | SuccessfulCreate | Created pod: cluster-monitoring-operator-756d64c8c4-ddgs9 |
| | openshift-operator-lifecycle-manager | replicaset-controller | package-server-manager-5c696dbdcd | SuccessfulCreate | Created pod: package-server-manager-5c696dbdcd-t7n5b |
| | openshift-kube-storage-version-migrator-operator | replicaset-controller | kube-storage-version-migrator-operator-cd5474998 | SuccessfulCreate | Created pod: kube-storage-version-migrator-operator-cd5474998-tckph |
| | openshift-operator-lifecycle-manager | default-scheduler | package-server-manager-5c696dbdcd-t7n5b | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-monitoring | default-scheduler | cluster-monitoring-operator-756d64c8c4-ddgs9 | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | default | node-controller | master-0 | RegisteredNode | Node master-0 event: Registered Node master-0 in Controller |
| | openshift-monitoring | default-scheduler | cluster-monitoring-operator-756d64c8c4-ddgs9 | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-operator-lifecycle-manager | default-scheduler | olm-operator-6b56bd877c-tk8xm | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-operator-lifecycle-manager | default-scheduler | collect-profiles-29522340-8cp6h | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | assisted-installer | default-scheduler | assisted-installer-controller-5fwlz | Scheduled | Successfully assigned assisted-installer/assisted-installer-controller-5fwlz to master-0 |
| (x4) | openshift-machine-config-operator | kubelet | kube-rbac-proxy-crio-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| (x4) | openshift-machine-config-operator | kubelet | kube-rbac-proxy-crio-master-0 | Created | Created container: kube-rbac-proxy-crio |
| (x4) | openshift-machine-config-operator | kubelet | kube-rbac-proxy-crio-master-0 | Started | Started container kube-rbac-proxy-crio |
| | openshift-network-operator | kubelet | network-operator-6fcf4c966-l24cg | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aacc2698d097e25bf26e35393ef3536f7a240880d0a87f46a2b7ea3c13731d1e" |
| | assisted-installer | kubelet | assisted-installer-controller-5fwlz | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e30865ea7d55b76cb925c7d26c650f0bc70fd9a02d7d59d0fe1a3024426229ad" |
| | openshift-network-operator | kubelet | network-operator-6fcf4c966-l24cg | Started | Started container network-operator |
| | openshift-network-operator | kubelet | network-operator-6fcf4c966-l24cg | Created | Created container: network-operator |
| | openshift-network-operator | kubelet | network-operator-6fcf4c966-l24cg | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aacc2698d097e25bf26e35393ef3536f7a240880d0a87f46a2b7ea3c13731d1e" in 4.117s (4.117s including waiting). Image size: 616473928 bytes. |
| | openshift-network-operator | network-operator | network-operator-lock | LeaderElection | master-0_91a8b342-95ed-4c52-bf83-8e3e47c3869a became leader |
| | openshift-network-operator | cluster-network-operator | network-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-network-operator | job-controller | mtu-prober | SuccessfulCreate | Created pod: mtu-prober-wlg8w |
| | openshift-network-operator | default-scheduler | mtu-prober-wlg8w | Scheduled | Successfully assigned openshift-network-operator/mtu-prober-wlg8w to master-0 |
| | assisted-installer | kubelet | assisted-installer-controller-5fwlz | Started | Started container assisted-installer-controller |
| | assisted-installer | kubelet | assisted-installer-controller-5fwlz | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e30865ea7d55b76cb925c7d26c650f0bc70fd9a02d7d59d0fe1a3024426229ad" in 7.378s (7.378s including waiting). Image size: 682673937 bytes. |
| | assisted-installer | kubelet | assisted-installer-controller-5fwlz | Created | Created container: assisted-installer-controller |
| | openshift-network-operator | kubelet | mtu-prober-wlg8w | Created | Created container: prober |
| | openshift-network-operator | kubelet | mtu-prober-wlg8w | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aacc2698d097e25bf26e35393ef3536f7a240880d0a87f46a2b7ea3c13731d1e" already present on machine |
| | openshift-network-operator | kubelet | mtu-prober-wlg8w | Started | Started container prober |
| | openshift-network-operator | job-controller | mtu-prober | Completed | Job completed |
| | assisted-installer | job-controller | assisted-installer-controller | Completed | Job completed |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-multus namespace |
| | openshift-multus | daemonset-controller | multus | SuccessfulCreate | Created pod: multus-9r5rl |
| | openshift-multus | default-scheduler | multus-9r5rl | Scheduled | Successfully assigned openshift-multus/multus-9r5rl to master-0 |
| | openshift-multus | daemonset-controller | multus | SuccessfulCreate | Created pod: multus-9r5rl |
| | openshift-multus | default-scheduler | multus-9r5rl | Scheduled | Successfully assigned openshift-multus/multus-9r5rl to master-0 |
| | openshift-multus | kubelet | multus-additional-cni-plugins-9nv95 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bfc52d6ca96f377d53757dc437ca720e860e3e016d16c084bd5f6f2e337d3a1d" |
| | openshift-multus | daemonset-controller | multus-additional-cni-plugins | SuccessfulCreate | Created pod: multus-additional-cni-plugins-9nv95 |
| | openshift-multus | daemonset-controller | network-metrics-daemon | SuccessfulCreate | Created pod: network-metrics-daemon-bnllz |
| | openshift-multus | daemonset-controller | network-metrics-daemon | SuccessfulCreate | Created pod: network-metrics-daemon-bnllz |
| | openshift-multus | default-scheduler | multus-additional-cni-plugins-9nv95 | Scheduled | Successfully assigned openshift-multus/multus-additional-cni-plugins-9nv95 to master-0 |
| | openshift-multus | daemonset-controller | multus-additional-cni-plugins | SuccessfulCreate | Created pod: multus-additional-cni-plugins-9nv95 |
| | openshift-multus | default-scheduler | network-metrics-daemon-bnllz | Scheduled | Successfully assigned openshift-multus/network-metrics-daemon-bnllz to master-0 |
| | openshift-multus | default-scheduler | network-metrics-daemon-bnllz | Scheduled | Successfully assigned openshift-multus/network-metrics-daemon-bnllz to master-0 |
| | openshift-multus | default-scheduler | multus-additional-cni-plugins-9nv95 | Scheduled | Successfully assigned openshift-multus/multus-additional-cni-plugins-9nv95 to master-0 |
| | openshift-multus | kubelet | multus-additional-cni-plugins-9nv95 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bfc52d6ca96f377d53757dc437ca720e860e3e016d16c084bd5f6f2e337d3a1d" |
| | openshift-multus | kubelet | multus-9r5rl | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe683caef773a1963fc13f96afe58892563ea9921db8ac39369e3a9a05ea7181" |
| | openshift-multus | kubelet | multus-9r5rl | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe683caef773a1963fc13f96afe58892563ea9921db8ac39369e3a9a05ea7181" |
| | openshift-multus | replicaset-controller | multus-admission-controller-7c64d55f8 | SuccessfulCreate | Created pod: multus-admission-controller-7c64d55f8-fzfsp |
| | openshift-multus | replicaset-controller | multus-admission-controller-7c64d55f8 | SuccessfulCreate | Created pod: multus-admission-controller-7c64d55f8-fzfsp |
| | openshift-multus | default-scheduler | multus-admission-controller-7c64d55f8-fzfsp | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-multus | deployment-controller | multus-admission-controller | ScalingReplicaSet | Scaled up replica set multus-admission-controller-7c64d55f8 to 1 |
| | openshift-multus | deployment-controller | multus-admission-controller | ScalingReplicaSet | Scaled up replica set multus-admission-controller-7c64d55f8 to 1 |
| | openshift-multus | default-scheduler | multus-admission-controller-7c64d55f8-fzfsp | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-multus | kubelet | multus-additional-cni-plugins-9nv95 | Created | Created container: egress-router-binary-copy |
| | openshift-multus | kubelet | multus-additional-cni-plugins-9nv95 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bfc52d6ca96f377d53757dc437ca720e860e3e016d16c084bd5f6f2e337d3a1d" in 3.004s (3.004s including waiting). Image size: 523760203 bytes. |
| | openshift-multus | kubelet | multus-additional-cni-plugins-9nv95 | Started | Started container egress-router-binary-copy |
| | openshift-multus | kubelet | multus-additional-cni-plugins-9nv95 | Started | Started container egress-router-binary-copy |
| | openshift-multus | kubelet | multus-additional-cni-plugins-9nv95 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bfc52d6ca96f377d53757dc437ca720e860e3e016d16c084bd5f6f2e337d3a1d" in 3.004s (3.004s including waiting). Image size: 523760203 bytes. |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-ovn-kubernetes namespace |
| | openshift-multus | kubelet | multus-additional-cni-plugins-9nv95 | Created | Created container: egress-router-binary-copy |
| | openshift-multus | kubelet | multus-additional-cni-plugins-9nv95 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e786e28fbe0b95c4f5723bebc3abde1333b259fd26673716fc5638d88286d8b7" |
| | openshift-multus | kubelet | multus-additional-cni-plugins-9nv95 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e786e28fbe0b95c4f5723bebc3abde1333b259fd26673716fc5638d88286d8b7" |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-host-network namespace |
| | openshift-ovn-kubernetes | default-scheduler | ovnkube-node-4z5g9 | Scheduled | Successfully assigned openshift-ovn-kubernetes/ovnkube-node-4z5g9 to master-0 |
| | openshift-ovn-kubernetes | default-scheduler | ovnkube-control-plane-bb7ffbb8d-rj245 | Scheduled | Successfully assigned openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-rj245 to master-0 |
| | openshift-ovn-kubernetes | daemonset-controller | ovnkube-node | SuccessfulCreate | Created pod: ovnkube-node-4z5g9 |
| | openshift-ovn-kubernetes | replicaset-controller | ovnkube-control-plane-bb7ffbb8d | SuccessfulCreate | Created pod: ovnkube-control-plane-bb7ffbb8d-rj245 |
| | openshift-ovn-kubernetes | deployment-controller | ovnkube-control-plane | ScalingReplicaSet | Scaled up replica set ovnkube-control-plane-bb7ffbb8d to 1 |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-network-diagnostics namespace |
| | openshift-multus | kubelet | multus-9r5rl | Created | Created container: kube-multus |
| | openshift-multus | kubelet | multus-9r5rl | Started | Started container kube-multus |
| | openshift-multus | kubelet | multus-additional-cni-plugins-9nv95 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e786e28fbe0b95c4f5723bebc3abde1333b259fd26673716fc5638d88286d8b7" in 8.843s (8.843s including waiting). Image size: 677894171 bytes. |
| | openshift-multus | kubelet | multus-additional-cni-plugins-9nv95 | Created | Created container: cni-plugins |
| | openshift-multus | kubelet | multus-9r5rl | Created | Created container: kube-multus |
| | openshift-multus | kubelet | multus-additional-cni-plugins-9nv95 | Started | Started container cni-plugins |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-4z5g9 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec" |
| | openshift-multus | kubelet | multus-9r5rl | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe683caef773a1963fc13f96afe58892563ea9921db8ac39369e3a9a05ea7181" in 13.358s (13.358s including waiting). Image size: 1232696860 bytes. |
| | openshift-multus | kubelet | multus-9r5rl | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe683caef773a1963fc13f96afe58892563ea9921db8ac39369e3a9a05ea7181" in 13.358s (13.358s including waiting). Image size: 1232696860 bytes. |
| | openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-bb7ffbb8d-rj245 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec" |
| | openshift-multus | kubelet | multus-additional-cni-plugins-9nv95 | Started | Started container cni-plugins |
| | openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-bb7ffbb8d-rj245 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-multus | kubelet | multus-additional-cni-plugins-9nv95 | Created | Created container: cni-plugins |
| | openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-bb7ffbb8d-rj245 | Created | Created container: kube-rbac-proxy |
| | openshift-multus | kubelet | multus-9r5rl | Started | Started container kube-multus |
| | openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-bb7ffbb8d-rj245 | Started | Started container kube-rbac-proxy |
| | openshift-multus | kubelet | multus-additional-cni-plugins-9nv95 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e786e28fbe0b95c4f5723bebc3abde1333b259fd26673716fc5638d88286d8b7" in 8.843s (8.843s including waiting). Image size: 677894171 bytes. |
| | openshift-multus | kubelet | multus-additional-cni-plugins-9nv95 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c38d58b62290b59d0182b50ce3cfd87fbb7729f3ce6fc06ffa46d9805c7dd78" |
| | openshift-network-diagnostics | replicaset-controller | network-check-source-7d8f4c8c66 | SuccessfulCreate | Created pod: network-check-source-7d8f4c8c66-fc8n7 |
| | openshift-multus | kubelet | multus-additional-cni-plugins-9nv95 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c38d58b62290b59d0182b50ce3cfd87fbb7729f3ce6fc06ffa46d9805c7dd78" |
| | openshift-network-diagnostics | default-scheduler | network-check-source-7d8f4c8c66-fc8n7 | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-network-diagnostics | deployment-controller | network-check-source | ScalingReplicaSet | Scaled up replica set network-check-source-7d8f4c8c66 to 1 |
| | openshift-network-diagnostics | daemonset-controller | network-check-target | SuccessfulCreate | Created pod: network-check-target-f25s7 |
| | openshift-network-diagnostics | default-scheduler | network-check-target-f25s7 | Scheduled | Successfully assigned openshift-network-diagnostics/network-check-target-f25s7 to master-0 |
| | openshift-multus | kubelet | multus-additional-cni-plugins-9nv95 | Created | Created container: bond-cni-plugin |
| | openshift-multus | kubelet | multus-additional-cni-plugins-9nv95 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:15c5e645edf257a08c061ad9ae7dab4293104a042b8396181d76dd28f396cebe" |
| | openshift-multus | kubelet | multus-additional-cni-plugins-9nv95 | Created | Created container: bond-cni-plugin |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-network-node-identity namespace |
| | openshift-multus | kubelet | multus-additional-cni-plugins-9nv95 | Started | Started container bond-cni-plugin |
| | openshift-multus | kubelet | multus-additional-cni-plugins-9nv95 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:15c5e645edf257a08c061ad9ae7dab4293104a042b8396181d76dd28f396cebe" |
| | openshift-multus | kubelet | multus-additional-cni-plugins-9nv95 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c38d58b62290b59d0182b50ce3cfd87fbb7729f3ce6fc06ffa46d9805c7dd78" in 1.577s (1.577s including waiting). Image size: 406416461 bytes. |
| | openshift-multus | kubelet | multus-additional-cni-plugins-9nv95 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c38d58b62290b59d0182b50ce3cfd87fbb7729f3ce6fc06ffa46d9805c7dd78" in 1.577s (1.577s including waiting). Image size: 406416461 bytes. |
| | openshift-multus | kubelet | multus-additional-cni-plugins-9nv95 | Started | Started container bond-cni-plugin |
| | openshift-multus | kubelet | multus-additional-cni-plugins-9nv95 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:15c5e645edf257a08c061ad9ae7dab4293104a042b8396181d76dd28f396cebe" in 1.234s (1.234s including waiting). Image size: 402172859 bytes. |
| | openshift-multus | kubelet | multus-additional-cni-plugins-9nv95 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:15c5e645edf257a08c061ad9ae7dab4293104a042b8396181d76dd28f396cebe" in 1.234s (1.234s including waiting). Image size: 402172859 bytes. |
| | openshift-multus | kubelet | multus-additional-cni-plugins-9nv95 | Started | Started container routeoverride-cni |
| | openshift-network-node-identity | kubelet | network-node-identity-xwftw | FailedMount | MountVolume.SetUp failed for volume "webhook-cert" : secret "network-node-identity-cert" not found |
| | openshift-multus | kubelet | multus-additional-cni-plugins-9nv95 | Created | Created container: routeoverride-cni |
| | openshift-multus | kubelet | multus-additional-cni-plugins-9nv95 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:df623c15a78ca969fb8ad134bde911c2047bf82b50244ee8e523763b6587e072" |
| | openshift-network-node-identity | daemonset-controller | network-node-identity | SuccessfulCreate | Created pod: network-node-identity-xwftw |
| | openshift-network-node-identity | default-scheduler | network-node-identity-xwftw | Scheduled | Successfully assigned openshift-network-node-identity/network-node-identity-xwftw to master-0 |
| | openshift-multus | kubelet | multus-additional-cni-plugins-9nv95 | Started | Started container routeoverride-cni |
| | openshift-multus | kubelet | multus-additional-cni-plugins-9nv95 | Created | Created container: routeoverride-cni |
| | openshift-multus | kubelet | multus-additional-cni-plugins-9nv95 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:df623c15a78ca969fb8ad134bde911c2047bf82b50244ee8e523763b6587e072" |
| | openshift-network-node-identity | kubelet | network-node-identity-xwftw | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec" |
| (x7) | openshift-multus | kubelet | network-metrics-daemon-bnllz | FailedMount | MountVolume.SetUp failed for volume "metrics-certs" : object "openshift-multus"/"metrics-daemon-secret" not registered |
| (x7) | openshift-multus | kubelet | network-metrics-daemon-bnllz | FailedMount | MountVolume.SetUp failed for volume "metrics-certs" : object "openshift-multus"/"metrics-daemon-secret" not registered |
| (x18) | openshift-multus | kubelet | network-metrics-daemon-bnllz | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x18) | openshift-multus | kubelet | network-metrics-daemon-bnllz | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-4z5g9 | Started | Started container ovn-acl-logging |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-4z5g9 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-4z5g9 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec" in 22.853s (22.853s including waiting). Image size: 1631983282 bytes. |
| | openshift-ovn-kubernetes | ovnk-controlplane | ovn-kubernetes-master | LeaderElection | ovnkube-control-plane-bb7ffbb8d-rj245 became leader |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-4z5g9 | Created | Created container: kubecfg-setup |
| | openshift-multus | kubelet | multus-additional-cni-plugins-9nv95 | Started | Started container whereabouts-cni-bincopy |
| | openshift-multus | kubelet | multus-additional-cni-plugins-9nv95 | Created | Created container: whereabouts-cni-bincopy |
| | openshift-multus | kubelet | multus-additional-cni-plugins-9nv95 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:df623c15a78ca969fb8ad134bde911c2047bf82b50244ee8e523763b6587e072" in 17.756s (17.756s including waiting). Image size: 870929735 bytes. |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-4z5g9 | Started | Started container kubecfg-setup |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-4z5g9 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec" already present on machine |
| | openshift-multus | kubelet | multus-additional-cni-plugins-9nv95 | Started | Started container whereabouts-cni-bincopy |
| | openshift-multus | kubelet | multus-additional-cni-plugins-9nv95 | Created | Created container: whereabouts-cni-bincopy |
| | openshift-multus | kubelet | multus-additional-cni-plugins-9nv95 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:df623c15a78ca969fb8ad134bde911c2047bf82b50244ee8e523763b6587e072" in 17.756s (17.756s including waiting). Image size: 870929735 bytes. |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-4z5g9 | Created | Created container: ovn-controller |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-4z5g9 | Started | Started container ovn-controller |
| | openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-bb7ffbb8d-rj245 | Started | Started container ovnkube-cluster-manager |
| | openshift-network-node-identity | kubelet | network-node-identity-xwftw | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec" in 17.387s (17.387s including waiting). Image size: 1631983282 bytes. |
| | openshift-network-node-identity | kubelet | network-node-identity-xwftw | Created | Created container: webhook |
| | openshift-network-node-identity | kubelet | network-node-identity-xwftw | Started | Started container webhook |
| | openshift-network-node-identity | kubelet | network-node-identity-xwftw | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec" already present on machine |
| | openshift-network-node-identity | kubelet | network-node-identity-xwftw | Created | Created container: approver |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-4z5g9 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-bb7ffbb8d-rj245 | Created | Created container: ovnkube-cluster-manager |
| | openshift-network-node-identity | kubelet | network-node-identity-xwftw | Started | Started container approver |
| | openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-bb7ffbb8d-rj245 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec" in 22.707s (22.707s including waiting). Image size: 1631983282 bytes. |
| | openshift-network-node-identity | master-0_5717552c-6c30-45da-a2d9-2e3f4ce0bcba | ovnkube-identity | LeaderElection | master-0_5717552c-6c30-45da-a2d9-2e3f4ce0bcba became leader |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-4z5g9 | Created | Created container: ovn-acl-logging |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-4z5g9 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec" already present on machine |
| | openshift-multus | kubelet | multus-additional-cni-plugins-9nv95 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:df623c15a78ca969fb8ad134bde911c2047bf82b50244ee8e523763b6587e072" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-4z5g9 | Created | Created container: kube-rbac-proxy-node |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-4z5g9 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-4z5g9 | Started | Started container northd |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-4z5g9 | Created | Created container: northd |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-4z5g9 | Created | Created container: kube-rbac-proxy-ovn-metrics |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-4z5g9 | Started | Started container nbdb |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-4z5g9 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-4z5g9 | Started | Started container kube-rbac-proxy-node |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-4z5g9 | Created | Created container: nbdb |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-4z5g9 | Started | Started container kube-rbac-proxy-ovn-metrics |
| | openshift-multus | kubelet | multus-additional-cni-plugins-9nv95 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:df623c15a78ca969fb8ad134bde911c2047bf82b50244ee8e523763b6587e072" already present on machine |
| | openshift-multus | kubelet | multus-additional-cni-plugins-9nv95 | Created | Created container: whereabouts-cni |
| | openshift-multus | kubelet | multus-additional-cni-plugins-9nv95 | Started | Started container whereabouts-cni |
| | openshift-multus | kubelet | multus-additional-cni-plugins-9nv95 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe683caef773a1963fc13f96afe58892563ea9921db8ac39369e3a9a05ea7181" already present on machine |
| | openshift-multus | kubelet | multus-additional-cni-plugins-9nv95 | Created | Created container: whereabouts-cni |
| | openshift-multus | kubelet | multus-additional-cni-plugins-9nv95 | Started | Started container whereabouts-cni |
| | openshift-multus | kubelet | multus-additional-cni-plugins-9nv95 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe683caef773a1963fc13f96afe58892563ea9921db8ac39369e3a9a05ea7181" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-4z5g9 | Created | Created container: sbdb |
| | openshift-multus | kubelet | multus-additional-cni-plugins-9nv95 | Created | Created container: kube-multus-additional-cni-plugins |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-4z5g9 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec" already present on machine |
| | openshift-ovn-kubernetes | daemonset-controller | ovnkube-node | SuccessfulDelete | Deleted pod: ovnkube-node-4z5g9 |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-4z5g9 | Started | Started container sbdb |
| | openshift-multus | kubelet | multus-additional-cni-plugins-9nv95 | Created | Created container: kube-multus-additional-cni-plugins |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-vdgrn | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-vdgrn | Started | Started container kubecfg-setup |
| | openshift-ovn-kubernetes | default-scheduler | ovnkube-node-vdgrn | Scheduled | Successfully assigned openshift-ovn-kubernetes/ovnkube-node-vdgrn to master-0 |
| | openshift-ovn-kubernetes | daemonset-controller | ovnkube-node | SuccessfulCreate | Created pod: ovnkube-node-vdgrn |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-vdgrn | Created | Created container: kubecfg-setup |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-vdgrn | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-vdgrn | Started | Started container ovn-controller |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-vdgrn | Created | Created container: ovn-controller |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-vdgrn | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-vdgrn | Started | Started container kube-rbac-proxy-ovn-metrics |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-vdgrn | Created | Created container: ovn-acl-logging |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-vdgrn | Started | Started container ovn-acl-logging |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-vdgrn | Started | Started container nbdb |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-vdgrn | Created | Created container: kube-rbac-proxy-node |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-vdgrn | Started | Started container kube-rbac-proxy-node |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-vdgrn | Created | Created container: nbdb |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-vdgrn | Created | Created container: kube-rbac-proxy-ovn-metrics |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-vdgrn | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-vdgrn | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec" already present on machine |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-vdgrn |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-vdgrn |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec" already present on machine | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-vdgrn |
Created |
Created container: northd | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-vdgrn |
Started |
Started container northd | |
| (x8) | openshift-cluster-version | kubelet | cluster-version-operator-76959b6567-v49tq | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : secret "cluster-version-operator-serving-cert" not found |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-vdgrn | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-vdgrn | Created | Created container: sbdb |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-vdgrn | Started | Started container sbdb |
| (x7) | openshift-network-diagnostics | kubelet | network-check-target-f25s7 | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-bpwhf" : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-vdgrn | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec" already present on machine |
| (x18) | openshift-network-diagnostics | kubelet | network-check-target-f25s7 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | default | ovnkube-csr-approver-controller | csr-kmprm | CSRApproved | CSR "csr-kmprm" has been approved |
| | default | ovnk-controlplane | master-0 | ErrorAddingResource | [k8s.ovn.org/node-chassis-id annotation not found for node master-0, error getting gateway config for node master-0: k8s.ovn.org/l3-gateway-config annotation not found for node "master-0", failed to update chassis to local for local node master-0, error: failed to parse node chassis-id for node - master-0, error: k8s.ovn.org/node-chassis-id annotation not found for node master-0] |
| | default | ovnkube-csr-approver-controller | csr-t2trz | CSRApproved | CSR "csr-t2trz" has been approved |
| | openshift-cluster-olm-operator | default-scheduler | cluster-olm-operator-55b69c6c48-mzk89 | Scheduled | Successfully assigned openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-mzk89 to master-0 |
| | openshift-kube-storage-version-migrator-operator | default-scheduler | kube-storage-version-migrator-operator-cd5474998-tckph | Scheduled | Successfully assigned openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-tckph to master-0 |
| | openshift-etcd-operator | default-scheduler | etcd-operator-67bf55ccdd-pjm6n | Scheduled | Successfully assigned openshift-etcd-operator/etcd-operator-67bf55ccdd-pjm6n to master-0 |
| | openshift-multus | default-scheduler | multus-admission-controller-7c64d55f8-fzfsp | Scheduled | Successfully assigned openshift-multus/multus-admission-controller-7c64d55f8-fzfsp to master-0 |
| | openshift-operator-lifecycle-manager | default-scheduler | olm-operator-6b56bd877c-tk8xm | Scheduled | Successfully assigned openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-tk8xm to master-0 |
| | openshift-dns-operator | default-scheduler | dns-operator-86b8869b79-lmqrr | Scheduled | Successfully assigned openshift-dns-operator/dns-operator-86b8869b79-lmqrr to master-0 |
| | openshift-controller-manager-operator | default-scheduler | openshift-controller-manager-operator-5f5f84757d-dsfkk | Scheduled | Successfully assigned openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-dsfkk to master-0 |
| | openshift-monitoring | default-scheduler | cluster-monitoring-operator-756d64c8c4-ddgs9 | Scheduled | Successfully assigned openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ddgs9 to master-0 |
| | openshift-network-operator | default-scheduler | iptables-alerter-v2h9q | Scheduled | Successfully assigned openshift-network-operator/iptables-alerter-v2h9q to master-0 |
| | openshift-cluster-storage-operator | default-scheduler | csi-snapshot-controller-operator-7b87b97578-9fpgj | Scheduled | Successfully assigned openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-9fpgj to master-0 |
| | openshift-network-operator | daemonset-controller | iptables-alerter | SuccessfulCreate | Created pod: iptables-alerter-v2h9q |
| | openshift-ingress-operator | default-scheduler | ingress-operator-c588d8cb4-nclxg | Scheduled | Successfully assigned openshift-ingress-operator/ingress-operator-c588d8cb4-nclxg to master-0 |
| | openshift-cluster-node-tuning-operator | default-scheduler | cluster-node-tuning-operator-ff6c9b66-k8xp8 | Scheduled | Successfully assigned openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-k8xp8 to master-0 |
| | openshift-apiserver-operator | default-scheduler | openshift-apiserver-operator-6d4655d9cf-5f5g9 | Scheduled | Successfully assigned openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-5f5g9 to master-0 |
| | openshift-authentication-operator | default-scheduler | authentication-operator-755d954778-jrdqm | Scheduled | Successfully assigned openshift-authentication-operator/authentication-operator-755d954778-jrdqm to master-0 |
| | openshift-marketplace | default-scheduler | marketplace-operator-6cc5b65c6b-wqxmh | Scheduled | Successfully assigned openshift-marketplace/marketplace-operator-6cc5b65c6b-wqxmh to master-0 |
| | openshift-service-ca-operator | default-scheduler | service-ca-operator-5dc4688546-sg75p | Scheduled | Successfully assigned openshift-service-ca-operator/service-ca-operator-5dc4688546-sg75p to master-0 |
| | openshift-kube-scheduler-operator | default-scheduler | openshift-kube-scheduler-operator-7485d55966-wcpf8 | Scheduled | Successfully assigned openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-wcpf8 to master-0 |
| | openshift-image-registry | default-scheduler | cluster-image-registry-operator-96c8c64b8-dtwmd | Scheduled | Successfully assigned openshift-image-registry/cluster-image-registry-operator-96c8c64b8-dtwmd to master-0 |
| | openshift-operator-lifecycle-manager | default-scheduler | package-server-manager-5c696dbdcd-t7n5b | Scheduled | Successfully assigned openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-t7n5b to master-0 |
| | openshift-kube-apiserver-operator | default-scheduler | kube-apiserver-operator-54984b6678-p5mdv | Scheduled | Successfully assigned openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-p5mdv to master-0 |
| | openshift-kube-controller-manager-operator | default-scheduler | kube-controller-manager-operator-78ff47c7c5-xvzq9 | Scheduled | Successfully assigned openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-xvzq9 to master-0 |
| | openshift-config-operator | default-scheduler | openshift-config-operator-7c6bdb986f-fcnqs | Scheduled | Successfully assigned openshift-config-operator/openshift-config-operator-7c6bdb986f-fcnqs to master-0 |
| | openshift-operator-lifecycle-manager | default-scheduler | catalog-operator-588944557d-kjh2v | Scheduled | Successfully assigned openshift-operator-lifecycle-manager/catalog-operator-588944557d-kjh2v to master-0 |
| | openshift-kube-apiserver-operator | kubelet | kube-apiserver-operator-54984b6678-p5mdv | Started | Started container kube-apiserver-operator |
| | openshift-cluster-olm-operator | multus | cluster-olm-operator-55b69c6c48-mzk89 | AddedInterface | Add eth0 [10.128.0.11/23] from ovn-kubernetes |
| | openshift-kube-storage-version-migrator-operator | kubelet | kube-storage-version-migrator-operator-cd5474998-tckph | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e391fce0b2e04f22fc089597db9e0671ba7f8b5b3a709151b5f33dd23b262144" |
| | openshift-kube-scheduler-operator | kubelet | openshift-kube-scheduler-operator-7485d55966-wcpf8 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f56d01ef06fe016c748e8c87538dea8e9fcc84856eb116bd3597cc8e042e9f0a" |
| | openshift-kube-storage-version-migrator-operator | multus | kube-storage-version-migrator-operator-cd5474998-tckph | AddedInterface | Add eth0 [10.128.0.23/23] from ovn-kubernetes |
| | openshift-etcd-operator | kubelet | etcd-operator-67bf55ccdd-pjm6n | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6dd9324942b3d09b4b9a768f36b47be4e555d947910ee3d115fc5448c95f7399" |
| | openshift-etcd-operator | multus | etcd-operator-67bf55ccdd-pjm6n | AddedInterface | Add eth0 [10.128.0.12/23] from ovn-kubernetes |
| | openshift-config-operator | multus | openshift-config-operator-7c6bdb986f-fcnqs | AddedInterface | Add eth0 [10.128.0.22/23] from ovn-kubernetes |
| | openshift-config-operator | kubelet | openshift-config-operator-7c6bdb986f-fcnqs | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9cc42212fb15c1f3e6a88acaaa4919c9693be3c6099ea849d28855e231dc9e44" |
| | openshift-apiserver-operator | kubelet | openshift-apiserver-operator-6d4655d9cf-5f5g9 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6dd544a8a6b4d08fe0f4fd076109c09cf181302ab6056ec6b2b89d68a52954c5" |
| | openshift-apiserver-operator | multus | openshift-apiserver-operator-6d4655d9cf-5f5g9 | AddedInterface | Add eth0 [10.128.0.7/23] from ovn-kubernetes |
| | openshift-kube-apiserver-operator | kubelet | kube-apiserver-operator-54984b6678-p5mdv | Created | Created container: kube-apiserver-operator |
| | openshift-controller-manager-operator | multus | openshift-controller-manager-operator-5f5f84757d-dsfkk | AddedInterface | Add eth0 [10.128.0.9/23] from ovn-kubernetes |
| | openshift-kube-apiserver-operator | kubelet | kube-apiserver-operator-54984b6678-p5mdv | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49" already present on machine |
| | openshift-controller-manager-operator | kubelet | openshift-controller-manager-operator-5f5f84757d-dsfkk | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f353131d8a1223db7f637c9851016b3a348d80c2b2be663a2db6d01e14ddca88" |
| | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-55b69c6c48-mzk89 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3bb3c46533b24f1a6a6669117dc888ed8f0c7ae56b34068a4ff2052335e34c4e" |
| | openshift-kube-controller-manager-operator | kubelet | kube-controller-manager-operator-78ff47c7c5-xvzq9 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e8ae0b5bab647ff989f276cead5f360bcb88c813f181d75dc3106eb5dbde0b39" |
| | openshift-kube-controller-manager-operator | multus | kube-controller-manager-operator-78ff47c7c5-xvzq9 | AddedInterface | Add eth0 [10.128.0.13/23] from ovn-kubernetes |
| | openshift-kube-apiserver-operator | multus | kube-apiserver-operator-54984b6678-p5mdv | AddedInterface | Add eth0 [10.128.0.19/23] from ovn-kubernetes |
| | openshift-cluster-storage-operator | multus | csi-snapshot-controller-operator-7b87b97578-9fpgj | AddedInterface | Add eth0 [10.128.0.6/23] from ovn-kubernetes |
| | openshift-kube-scheduler-operator | multus | openshift-kube-scheduler-operator-7485d55966-wcpf8 | AddedInterface | Add eth0 [10.128.0.20/23] from ovn-kubernetes |
| | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-operator-7b87b97578-9fpgj | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:13d06502c0f0a3c73f69bf8d0743718f7cfc46e71f4a12916517ad7e9bff17e1" |
| | openshift-network-operator | kubelet | iptables-alerter-v2h9q | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e155421620a4ac28a759265f53059b75308fdd1491caeba6a9a34d2fbeab4954" |
| | openshift-authentication-operator | kubelet | authentication-operator-755d954778-jrdqm | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:047699c5a63593f45e9dd6f9fac0fa636ffc012331ee592891bfb08001bdd963" |
| | openshift-authentication-operator | multus | authentication-operator-755d954778-jrdqm | AddedInterface | Add eth0 [10.128.0.24/23] from ovn-kubernetes |
| | openshift-service-ca-operator | kubelet | service-ca-operator-5dc4688546-sg75p | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5abe992def861ec075251ae17bbd66fa23bd05bd3701953c0fdcf68a8d161f1e" |
| | openshift-service-ca-operator | multus | service-ca-operator-5dc4688546-sg75p | AddedInterface | Add eth0 [10.128.0.5/23] from ovn-kubernetes |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-high-cpu-usage-alert-controller-highcpuusagealertcontroller | kube-apiserver-operator | PrometheusRuleCreated | Created PrometheusRule.monitoring.coreos.com/cpu-utilization -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator | kube-apiserver-operator-lock | LeaderElection | kube-apiserver-operator-54984b6678-p5mdv_8b52e755-206f-4827-a7e2-b327c86a7e48 became leader |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorVersionChanged | clusteroperator/kube-apiserver version "raw-internal" changed from "" to "4.18.32" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-serviceaccountissuercontroller | kube-apiserver-operator | ServiceAccountIssuer | Issuer set to default value "https://kubernetes.default.svc" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kube-apiserver-node | kube-apiserver-operator | MasterNodesReadyChanged | All master nodes are ready |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kube-apiserver-node | kube-apiserver-operator | MasterNodeObserved | Observed new master node master-0 |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-audit-policy-controller-auditpolicycontroller | kube-apiserver-operator | FastControllerResync | Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling |
| | openshift-kube-apiserver-operator | kube-apiserver-operator | kube-apiserver-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded changed from Unknown to False ("NodeControllerDegraded: All master nodes are ready"),Progressing changed from Unknown to False ("All is well"),Upgradeable changed from Unknown to True ("KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are synced."),EvaluationConditionsDetected changed from Unknown to False ("All is well") |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [{"operator.openshift.io" "kubeapiservers" "" "cluster"} {"apiextensions.k8s.io" "customresourcedefinitions" "" ""} {"security.openshift.io" "securitycontextconstraints" "" ""} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-apiserver-operator"} {"" "namespaces" "" "openshift-kube-apiserver"} {"admissionregistration.k8s.io" "mutatingwebhookconfigurations" "" ""} {"admissionregistration.k8s.io" "validatingwebhookconfigurations" "" ""} {"controlplane.operator.openshift.io" "podnetworkconnectivitychecks" "openshift-kube-apiserver" ""} {"apiserver.openshift.io" "apirequestcounts" "" ""}] to [{"operator.openshift.io" "kubeapiservers" "" "cluster"} {"apiextensions.k8s.io" "customresourcedefinitions" "" ""} {"security.openshift.io" "securitycontextconstraints" "" ""} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-apiserver-operator"} {"" "namespaces" "" "openshift-kube-apiserver"} {"admissionregistration.k8s.io" "mutatingwebhookconfigurations" "" ""} {"admissionregistration.k8s.io" "validatingwebhookconfigurations" "" ""} {"controlplane.operator.openshift.io" "podnetworkconnectivitychecks" "openshift-kube-apiserver" ""} {"apiserver.openshift.io" "apirequestcounts" "" ""} {"config.openshift.io" "nodes" "" "cluster"}],status.versions changed from [] to [{"raw-internal" "4.18.32"}] |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SignerUpdateRequired | "localhost-recovery-serving-signer" in "openshift-kube-apiserver-operator" requires a new signing cert/key pair: secret doesn't exist |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | CABundleUpdateRequired | "localhost-serving-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]",Progressing message changed from "All is well" to "NodeInstallerProgressing: 1 node is at revision 0",Available changed from Unknown to False ("StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0") |
| (x2) | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | CABundleUpdateRequired | "loadbalancer-serving-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SignerUpdateRequired | "node-system-admin-signer" in "openshift-kube-apiserver-operator" requires a new signing cert/key pair: secret doesn't exist |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | CABundleUpdateRequired | "service-network-serving-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | TargetUpdateRequired | "external-loadbalancer-serving-certkey" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/localhost-serving-ca -n openshift-kube-apiserver-operator because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | TargetUpdateRequired | "localhost-serving-cert-certkey" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | CABundleUpdateRequired | "kube-apiserver-to-kubelet-client-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretUpdated | Updated Secret/kube-apiserver-to-kubelet-signer -n openshift-kube-apiserver-operator because it changed |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretUpdated | Updated Secret/kube-control-plane-signer -n openshift-kube-apiserver-operator because it changed |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-boundsatokensignercontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/bound-sa-token-signing-certs -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/loadbalancer-serving-ca -n openshift-kube-apiserver-operator because it was missing |
| (x5) | openshift-multus | kubelet | multus-admission-controller-7c64d55f8-fzfsp | FailedMount | MountVolume.SetUp failed for volume "webhook-certs" : secret "multus-admission-controller-secret" not found |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-serving-signer -n openshift-kube-apiserver-operator because it was missing |
| (x5) | openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-ff6c9b66-k8xp8 | FailedMount | MountVolume.SetUp failed for volume "apiservice-cert" : secret "performance-addon-operator-webhook-cert" not found |
| (x5) | openshift-monitoring | kubelet | cluster-monitoring-operator-756d64c8c4-ddgs9 | FailedMount | MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" : secret "cluster-monitoring-operator-tls" not found |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/service-network-serving-ca -n openshift-kube-apiserver-operator because it was missing |
| (x5) | openshift-marketplace | kubelet | marketplace-operator-6cc5b65c6b-wqxmh | FailedMount | MountVolume.SetUp failed for volume "marketplace-operator-metrics" : secret "marketplace-operator-metrics" not found |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | CABundleUpdateRequired | "node-system-admin-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist |
| (x5) | openshift-image-registry | kubelet | cluster-image-registry-operator-96c8c64b8-dtwmd | FailedMount | MountVolume.SetUp failed for volume "image-registry-operator-tls" : secret "image-registry-operator-tls" not found |
| (x5) | openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-ff6c9b66-k8xp8 | FailedMount | MountVolume.SetUp failed for volume "node-tuning-operator-tls" : secret "node-tuning-operator-tls" not found |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretCreated | Created Secret/node-system-admin-signer -n openshift-kube-apiserver-operator because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | CABundleUpdateRequired | "localhost-recovery-serving-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist |
| (x5) | openshift-operator-lifecycle-manager | kubelet | package-server-manager-5c696dbdcd-t7n5b | FailedMount | MountVolume.SetUp failed for volume "package-server-manager-serving-cert" : secret "package-server-manager-serving-cert" not found |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | ConfigMapCreateFailed | Failed to create ConfigMap/: configmaps "loadbalancer-serving-ca" already exists |
| (x5) | openshift-dns-operator | kubelet | dns-operator-86b8869b79-lmqrr | FailedMount | MountVolume.SetUp failed for volume "metrics-tls" : secret "metrics-tls" not found |
| (x5) | openshift-ingress-operator | kubelet | ingress-operator-c588d8cb4-nclxg | FailedMount | MountVolume.SetUp failed for volume "metrics-tls" : secret "metrics-tls" not found |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | TargetUpdateRequired | "service-network-serving-certkey" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist |
| (x5) | openshift-cluster-node-tuning-operator |
kubelet |
cluster-node-tuning-operator-ff6c9b66-k8xp8 |
FailedMount |
MountVolume.SetUp failed for volume "apiservice-cert" : secret "performance-addon-operator-webhook-cert" not found |
| (x5) | openshift-operator-lifecycle-manager |
kubelet |
olm-operator-6b56bd877c-tk8xm |
FailedMount |
MountVolume.SetUp failed for volume "srv-cert" : secret "olm-operator-serving-cert" not found |
| (x5) | openshift-operator-lifecycle-manager |
kubelet |
catalog-operator-588944557d-kjh2v |
FailedMount |
MountVolume.SetUp failed for volume "srv-cert" : secret "catalog-operator-serving-cert" not found |
| (x5) | openshift-monitoring |
kubelet |
cluster-monitoring-operator-756d64c8c4-ddgs9 |
FailedMount |
MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" : secret "cluster-monitoring-operator-tls" not found |
| (x5) | openshift-cluster-node-tuning-operator |
kubelet |
cluster-node-tuning-operator-ff6c9b66-k8xp8 |
FailedMount |
MountVolume.SetUp failed for volume "node-tuning-operator-tls" : secret "node-tuning-operator-tls" not found |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-audit-policy-controller-auditpolicycontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-audit-policies -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | TargetUpdateRequired | "internal-loadbalancer-serving-certkey" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | CABundleUpdateRequired | "kube-apiserver-aggregator-client-ca" in "openshift-config-managed" requires a new cert: configmap doesn't exist |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretUpdated | Updated Secret/aggregator-client-signer -n openshift-kube-apiserver-operator because it changed |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/sa-token-signing-certs -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller-cert-rotation-controller-InternalLoadBalancerServing-certrotationcontroller | kube-apiserver-operator | RotationError | configmaps "loadbalancer-serving-ca" already exists |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found\nCertRotation_InternalLoadBalancerServing_Degraded: configmaps \"loadbalancer-serving-ca\" already exists" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | TargetUpdateRequired | "kubelet-client" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-boundsatokensignercontroller | kube-apiserver-operator | SecretCreated | Created Secret/bound-service-account-signing-key -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | TargetUpdateRequired | "kube-controller-manager-client-cert-key" in "openshift-config-managed" requires a new target cert/key pair: secret doesn't exist |
| (x3) | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | CABundleUpdateRequired | "kube-control-plane-signer-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretCreated | Created Secret/external-loadbalancer-serving-certkey -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-control-plane-signer-ca -n openshift-kube-apiserver-operator because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-to-kubelet-client-ca -n openshift-kube-apiserver-operator because it was missing |
| | default | kubelet | master-0 | NodeAllocatableEnforced | Updated Node Allocatable limit across pods |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-serving-cert-certkey -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/localhost-recovery-serving-ca -n openshift-kube-apiserver-operator because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | TargetUpdateRequired | "localhost-recovery-serving-certkey" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist |
| | default | kubelet | master-0 | NodeHasNoDiskPressure | Node master-0 status is now: NodeHasNoDiskPressure |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/node-system-admin-ca -n openshift-kube-apiserver-operator because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretCreated | Created Secret/service-network-serving-certkey -n openshift-kube-apiserver because it was missing |
| | default | kubelet | master-0 | NodeHasSufficientMemory | Node master-0 status is now: NodeHasSufficientMemory |
| | default | kubelet | master-0 | NodeHasSufficientPID | Node master-0 status is now: NodeHasSufficientPID |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | TargetUpdateRequired | "check-endpoints-client-cert-key" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | TargetUpdateRequired | "node-system-admin-client" in "openshift-kube-apiserver-operator" requires a new target cert/key pair: secret doesn't exist |
| | default | kubelet | master-0 | Starting | Starting kubelet. |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-backingresourcecontroller-backingresourcecontroller-staticresources | kube-apiserver-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:openshift-kube-apiserver-installer because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-backingresourcecontroller-backingresourcecontroller-staticresources | kube-apiserver-operator | ServiceAccountCreated | Created ServiceAccount/installer-sa -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:kube-apiserver-check-endpoints-node-reader because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found\nCertRotation_KubeSchedulerClient_Degraded: configmaps \"kube-control-plane-signer-ca\" already exists\nCertRotation_InternalLoadBalancerServing_Degraded: configmaps \"loadbalancer-serving-ca\" already exists" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nCertRotation_KubeSchedulerClient_Degraded: configmaps \"kube-control-plane-signer-ca\" already exists\nCertRotation_InternalLoadBalancerServing_Degraded: configmaps \"loadbalancer-serving-ca\" already exists" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | TargetUpdateRequired | "kube-scheduler-client-cert-key" in "openshift-config-managed" requires a new target cert/key pair: secret doesn't exist |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretCreated | Created Secret/node-system-admin-client -n openshift-kube-apiserver-operator because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller-cert-rotation-controller-KubeSchedulerClient-certrotationcontroller | kube-apiserver-operator | RotationError | configmaps "kube-control-plane-signer-ca" already exists |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretCreated | Created Secret/check-endpoints-client-cert-key -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | TargetUpdateRequired | "aggregator-client" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist |
| | openshift-network-operator | kubelet | iptables-alerter-v2h9q | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e155421620a4ac28a759265f53059b75308fdd1491caeba6a9a34d2fbeab4954" already present on machine |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-aggregator-client-ca -n openshift-config-managed because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretCreated | Created Secret/internal-loadbalancer-serving-certkey -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/bound-sa-token-signing-certs -n openshift-config-managed because it was missing |
| | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-55b69c6c48-mzk89 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3bb3c46533b24f1a6a6669117dc888ed8f0c7ae56b34068a4ff2052335e34c4e" already present on machine |
| | openshift-authentication-operator | kubelet | authentication-operator-755d954778-jrdqm | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:047699c5a63593f45e9dd6f9fac0fa636ffc012331ee592891bfb08001bdd963" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found\nCertRotation_InternalLoadBalancerServing_Degraded: configmaps \"loadbalancer-serving-ca\" already exists" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found\nCertRotation_KubeSchedulerClient_Degraded: configmaps \"kube-control-plane-signer-ca\" already exists\nCertRotation_InternalLoadBalancerServing_Degraded: configmaps \"loadbalancer-serving-ca\" already exists" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-boundsatokensignercontroller | kube-apiserver-operator | SecretCreateFailed | Failed to create Secret/bound-service-account-signing-key -n openshift-kube-apiserver: secrets "bound-service-account-signing-key" already exists |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-serving-certkey -n openshift-kube-apiserver because it was missing |
| | openshift-config-operator | kubelet | openshift-config-operator-7c6bdb986f-fcnqs | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9cc42212fb15c1f3e6a88acaaa4919c9693be3c6099ea849d28855e231dc9e44" already present on machine |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretCreated | Created Secret/kube-scheduler-client-cert-key -n openshift-config-managed because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretCreated | Created Secret/kubelet-client -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:check-endpoints because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | ServiceCreated | Created Service/apiserver -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:check-endpoints-node-reader because it was missing |
| (x2) | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | ConfigMapCreateFailed | Failed to create ConfigMap/: configmaps "kube-control-plane-signer-ca" already exists |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretCreated | Created Secret/kube-controller-manager-client-cert-key -n openshift-config-managed because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:check-endpoints-crd-reader because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:kube-apiserver-check-endpoints-auth-delegator because it was missing |
| | openshift-network-diagnostics | kubelet | network-check-target-f25s7 | Started | Started container network-check-target-container |
| | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-55b69c6c48-mzk89 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae30b3ab740f21c451d0272bceacb99fa34d22bbf2ea22f1e1e18230a156104b" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretCreated | Created Secret/aggregator-client -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:kube-apiserver-check-endpoints-crd-reader because it was missing |
| | openshift-network-diagnostics | kubelet | network-check-target-f25s7 | Created | Created container: network-check-target-container |
| | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-55b69c6c48-mzk89 | Started | Started container copy-catalogd-manifests |
| | openshift-config-operator | kubelet | openshift-config-operator-7c6bdb986f-fcnqs | Created | Created container: openshift-api |
| | openshift-network-diagnostics | kubelet | network-check-target-f25s7 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aacc2698d097e25bf26e35393ef3536f7a240880d0a87f46a2b7ea3c13731d1e" already present on machine |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | TargetUpdateRequired | "control-plane-node-admin-client-cert-key" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller-cert-rotation-controller-ControlPlaneNodeAdminClient-certrotationcontroller | kube-apiserver-operator | RotationError | configmaps "kube-control-plane-signer-ca" already exists |
| | openshift-network-diagnostics | multus | network-check-target-f25s7 | AddedInterface | Add eth0 [10.128.0.3/23] from ovn-kubernetes |
| | openshift-config-operator | kubelet | openshift-config-operator-7c6bdb986f-fcnqs | Started | Started container openshift-api |
| | openshift-config-operator | kubelet | openshift-config-operator-7c6bdb986f-fcnqs | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2431778975829348e271dc9bf123c7a24c81a52537a61414cd17c8436436078b" |
| | openshift-authentication-operator | kubelet | authentication-operator-755d954778-jrdqm | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:047699c5a63593f45e9dd6f9fac0fa636ffc012331ee592891bfb08001bdd963" in 1.149s (1.149s including waiting). Image size: 508050651 bytes. |
| | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-55b69c6c48-mzk89 | Created | Created container: copy-catalogd-manifests |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-lock | LeaderElection | kube-storage-version-migrator-operator-cd5474998-tckph_54fecf49-41f4-4ac5-80ec-ed18d88566fd became leader |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nCertRotation_KubeSchedulerClient_Degraded: configmaps \"kube-control-plane-signer-ca\" already exists\nCertRotation_InternalLoadBalancerServing_Degraded: configmaps \"loadbalancer-serving-ca\" already exists" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nCertRotation_KubeSchedulerClient_Degraded: configmaps \"kube-control-plane-signer-ca\" already exists\nCertRotation_InternalLoadBalancerServing_Degraded: configmaps \"loadbalancer-serving-ca\" already exists\nCertRotation_ControlPlaneNodeAdminClient_Degraded: configmaps \"kube-control-plane-signer-ca\" already exists" |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Degraded changed from Unknown to False ("All is well") |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded changed from Unknown to False ("NodeControllerDegraded: All master nodes are ready") |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-config-observer-configobserver | openshift-controller-manager-operator | ObservedConfigChanged | Writing updated observed config: map[string]any{ + "build": map[string]any{ + "buildDefaults": map[string]any{"resources": map[string]any{}}, + "imageTemplateFormat": map[string]any{ + "format": string("quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e45a7281a6"...), + }, + }, + "controllers": []any{ + string("openshift.io/build"), string("openshift.io/build-config-change"), + string("openshift.io/builder-rolebindings"), + string("openshift.io/builder-serviceaccount"), + string("-openshift.io/default-rolebindings"), string("openshift.io/deployer"), + string("openshift.io/deployer-rolebindings"), + string("openshift.io/deployer-serviceaccount"), + string("openshift.io/deploymentconfig"), string("openshift.io/image-import"), + string("openshift.io/image-puller-rolebindings"), + string("openshift.io/image-signature-import"), + string("openshift.io/image-trigger"), string("openshift.io/ingress-ip"), + string("openshift.io/ingress-to-route"), + string("openshift.io/origin-namespace"), ..., + }, + "deployer": map[string]any{ + "imageTemplateFormat": map[string]any{ + "format": string("quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:45836e9b83"...), + }, + }, + "featureGates": []any{string("BuildCSIVolumes=true")}, + "ingress": map[string]any{"ingressIPNetworkCIDR": string("")}, } |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator-lock | LeaderElection | openshift-controller-manager-operator-5f5f84757d-dsfkk_a4020b0d-4828-4bc0-be49-7233c59d8f1b became leader |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/system:openshift:controller:check-endpoints -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/system:openshift:controller:kube-apiserver-check-endpoints -n kube-system because it was missing |
| | openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled up replica set controller-manager-dc99ff586 to 1 |
| | openshift-cluster-storage-operator | deployment-controller | csi-snapshot-controller | ScalingReplicaSet | Scaled up replica set csi-snapshot-controller-74b6595c6d to 1 |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"operator.openshift.io" "openshiftcontrollermanagers" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-controller-manager-operator"} {"" "namespaces" "" "openshift-controller-manager"} {"" "namespaces" "" "openshift-route-controller-manager"}] |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapCreateFailed | Failed to create ConfigMap/config -n openshift-controller-manager: namespaces "openshift-controller-manager" not found |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Progressing message changed from "CSISnapshotControllerProgressing: Waiting for Deployment to act on changes" to "CSISnapshotControllerProgressing: Waiting for Deployment to deploy pods" |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:openshift:tokenreview-openshift-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:tokenreview-openshift-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapCreateFailed | Failed to create ConfigMap/openshift-service-ca -n openshift-controller-manager: namespaces "openshift-controller-manager" not found |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleCreateFailed | Failed to create Role.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n openshift-controller-manager: namespaces "openshift-controller-manager" not found |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapCreateFailed | Failed to create ConfigMap/openshift-global-ca -n openshift-controller-manager: namespaces "openshift-controller-manager" not found |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleBindingCreateFailed | Failed to create RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n openshift-controller-manager: namespaces "openshift-controller-manager" not found |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | NamespaceCreated | Created Namespace/openshift-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapCreateFailed | Failed to create ConfigMap/config -n openshift-route-controller-manager: namespaces "openshift-route-controller-manager" not found |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "WebhookRemovalControllerDegraded: csisnapshotcontrollers.operator.openshift.io \"cluster\" not found" to "All is well" |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:openshift:openshift-route-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-route-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | DeploymentCreated | Created Deployment.apps/controller-manager -n openshift-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleCreateFailed | Failed to create Role.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-route-controller-manager -n openshift-route-controller-manager: namespaces "openshift-route-controller-manager" not found |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | DeploymentCreateFailed | Failed to create Deployment.apps/route-controller-manager -n openshift-route-controller-manager: namespaces "openshift-route-controller-manager" not found |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleBindingCreateFailed | Failed to create RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-route-controller-manager -n openshift-route-controller-manager: namespaces "openshift-route-controller-manager" not found |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | NamespaceCreated | Created Namespace/openshift-route-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ServiceAccountCreated | Created ServiceAccount/route-controller-manager-sa -n openshift-route-controller-manager because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Upgradeable changed from Unknown to True ("All is well") |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | FastControllerResync | Controller "openshift-apiserver-APIService" resync interval is set to 10s which might lead to client request throttling |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator | kube-controller-manager-operator-lock | LeaderElection | kube-controller-manager-operator-78ff47c7c5-xvzq9_09a1f497-e2ba-46b6-8a0f-433c36b96ec0 became leader |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"operator.openshift.io" "openshiftapiservers" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-apiserver-operator"} {"" "namespaces" "" "openshift-apiserver"} {"" "namespaces" "" "openshift-etcd-operator"} {"" "endpoints" "openshift-etcd" "host-etcd-2"} {"controlplane.operator.openshift.io" "podnetworkconnectivitychecks" "openshift-apiserver" ""} {"apiregistration.k8s.io" "apiservices" "" "v1.apps.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.authorization.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.build.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.image.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.project.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.quota.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.route.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.security.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.template.openshift.io"}],status.versions changed from [] to [{"operator" "4.18.32"}] |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator | kube-controller-manager-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorVersionChanged | clusteroperator/kube-controller-manager version "raw-internal" changed from "" to "4.18.32" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-kube-controller-manager-node | kube-controller-manager-operator | MasterNodeObserved | Observed new master node master-0 |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [{"operator.openshift.io" "kubecontrollermanagers" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-controller-manager"} {"" "namespaces" "" "openshift-kube-controller-manager-operator"} {"" "namespaces" "" "kube-system"} {"" "nodes" "" ""} {"certificates.k8s.io" "certificatesigningrequests" "" ""}] to [{"operator.openshift.io" "kubecontrollermanagers" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-controller-manager"} {"" "namespaces" "" "openshift-kube-controller-manager-operator"} {"" "namespaces" "" "kube-system"} {"certificates.k8s.io" "certificatesigningrequests" "" ""} {"" "nodes" "" ""} {"config.openshift.io" "nodes" "" "cluster"}],status.versions changed from [] to [{"raw-internal" "4.18.32"}] |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Upgradeable changed from Unknown to True ("All is well") |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-kube-controller-manager-node | kube-controller-manager-operator | MasterNodesReadyChanged | All master nodes are ready |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorVersionChanged | clusteroperator/openshift-apiserver version "operator" changed from "" to "4.18.32" |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/system:openshift:sa-creating-route-controller-manager -n openshift-infra because it was missing |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-csisnapshotgueststaticresourcecontroller-csisnapshotgueststaticresourcecontroller-staticresources | csi-snapshot-controller-operator | CustomResourceDefinitionCreated | Created CustomResourceDefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io because it was missing |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Progressing changed from Unknown to True ("CSISnapshotControllerProgressing: Waiting for Deployment to act on changes"),Available changed from Unknown to False ("CSISnapshotControllerAvailable: Waiting for Deployment") |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-csisnapshotgueststaticresourcecontroller-csisnapshotgueststaticresourcecontroller-staticresources | csi-snapshot-controller-operator | CustomResourceDefinitionCreated | Created CustomResourceDefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io because it was missing |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Upgradeable changed from Unknown to True ("All is well") |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-csisnapshotgueststaticresourcecontroller-csisnapshotgueststaticresourcecontroller-staticresources | csi-snapshot-controller-operator | CustomResourceDefinitionCreated | Created CustomResourceDefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io because it was missing |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Degraded set to False ("WebhookRemovalControllerDegraded: csisnapshotcontrollers.operator.openshift.io \"cluster\" not found"),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"" "namespaces" "" "openshift-cluster-storage-operator"} {"operator.openshift.io" "csisnapshotcontrollers" "" "cluster"}] |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-config-observer-configobserver | openshift-controller-manager-operator | ObserveFeatureFlagsUpdated | Updated featureGates to BuildCSIVolumes=true |
| | openshift-apiserver-operator | openshift-apiserver-operator | openshift-apiserver-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-apiserver-operator | openshift-apiserver-operator | openshift-apiserver-operator-lock | LeaderElection | openshift-apiserver-operator-6d4655d9cf-5f5g9_9f0f58c0-7dd4-4d2c-972b-01827ed5e736 became leader |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-csisnapshotcontroller-deployment-controller--csisnapshotcontroller | csi-snapshot-controller-operator | DeploymentCreated | Created Deployment.apps/csi-snapshot-controller -n openshift-cluster-storage-operator because it was missing |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-csisnapshotstaticresourcecontroller-csisnapshotstaticresourcecontroller-staticresources | csi-snapshot-controller-operator | ServiceAccountCreated | Created ServiceAccount/csi-snapshot-controller -n openshift-cluster-storage-operator because it was missing |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator | csi-snapshot-controller-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-lock | LeaderElection | openshift-kube-scheduler-operator-7485d55966-wcpf8_bb273c86-1924-4a1f-a542-258ee34aa707 became leader |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator | kube-storage-version-migrator-operator | OperatorStatusChanged | Status for clusteroperator/kube-storage-version-migrator changed: Progressing message changed from "KubeStorageVersionMigratorProgressing: Waiting for Deployment to act on changes" to "KubeStorageVersionMigratorProgressing: Waiting for Deployment to deploy pods" |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator | kube-storage-version-migrator-operator | OperatorStatusChanged | Status for clusteroperator/kube-storage-version-migrator changed: Progressing changed from Unknown to True ("KubeStorageVersionMigratorProgressing: Waiting for Deployment to act on changes"),Available changed from Unknown to False ("KubeStorageVersionMigratorAvailable: Waiting for Deployment") |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator | kube-storage-version-migrator-operator | OperatorStatusChanged | Status for clusteroperator/kube-storage-version-migrator changed: Degraded changed from Unknown to False ("All is well") |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-kubestorageversionmigratorstaticresources-kubestorageversionmigratorstaticresources-staticresources | kube-storage-version-migrator-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/storage-version-migration-migrator because it was missing |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-kubestorageversionmigrator-deployment-controller--kubestorageversionmigrator | kube-storage-version-migrator-operator | DeploymentCreated | Created Deployment.apps/migrator -n openshift-kube-storage-version-migrator because it was missing |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-kubestorageversionmigratorstaticresources-kubestorageversionmigratorstaticresources-staticresources | kube-storage-version-migrator-operator | ServiceAccountCreated | Created ServiceAccount/kube-storage-version-migrator-sa -n openshift-kube-storage-version-migrator because it was missing |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator | kube-storage-version-migrator-operator | OperatorStatusChanged | Status for clusteroperator/kube-storage-version-migrator changed: Upgradeable changed from Unknown to True ("All is well") |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-kubestorageversionmigratorstaticresources-kubestorageversionmigratorstaticresources-staticresources | kube-storage-version-migrator-operator | NamespaceCreated | Created Namespace/openshift-kube-storage-version-migrator because it was missing |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator | kube-storage-version-migrator-operator | OperatorStatusChanged | Status for clusteroperator/kube-storage-version-migrator changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.versions changed from [] to [{"operator" "4.18.32"}] |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator | kube-storage-version-migrator-operator | OperatorVersionChanged | clusteroperator/kube-storage-version-migrator version "operator" changed from "" to "4.18.32" |
| | openshift-kube-storage-version-migrator | deployment-controller | migrator | ScalingReplicaSet | Scaled up replica set migrator-5bd989df77 to 1 |
| | openshift-etcd-operator | openshift-cluster-etcd-operator | openshift-cluster-etcd-operator-lock | LeaderElection | etcd-operator-67bf55ccdd-pjm6n_5649cc5b-33fc-4a78-9e2b-2c538f842147 became leader |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator | openshift-kube-scheduler-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| (x2) | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-kube-scheduler-node | openshift-kube-scheduler-operator | MasterNodeObserved | Observed new master node master-0 |
| (x2) | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorVersionChanged | clusteroperator/kube-scheduler version "raw-internal" changed from "" to "4.18.32" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [{"operator.openshift.io" "kubeschedulers" "" "cluster"} {"config.openshift.io" "schedulers" "" ""} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-scheduler-operator"} {"" "namespaces" "" "openshift-kube-scheduler"} {"controlplane.operator.openshift.io" "podnetworkconnectivitychecks" "openshift-kube-scheduler" ""}] to [{"operator.openshift.io" "kubeschedulers" "" "cluster"} {"config.openshift.io" "schedulers" "" ""} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-scheduler"} {"" "namespaces" "" "openshift-kube-scheduler-operator"} {"controlplane.operator.openshift.io" "podnetworkconnectivitychecks" "openshift-kube-apiserver" ""}],status.versions changed from [] to [{"raw-internal" "4.18.32"}] |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Upgradeable changed from Unknown to True ("All is well") |
| (x2) | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-kube-scheduler-node | openshift-kube-scheduler-operator | MasterNodesReadyChanged | All master nodes are ready |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded changed from Unknown to False ("All is well") |
| | openshift-service-ca-operator | service-ca-operator | service-ca-operator-lock | LeaderElection | service-ca-operator-5dc4688546-sg75p_4a0b29f3-b788-4bc2-b7c2-18cbd8542c3f became leader |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "All is well" to "NodeControllerDegraded: All master nodes are ready" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-config-observer-configobserver | openshift-kube-scheduler-operator | ObserveTLSSecurityProfile | minTLSVersion changed to VersionTLS12 |
| | openshift-service-ca-operator | service-ca-operator-status-controller-statussyncer_service-ca | service-ca-operator | OperatorStatusChanged | Status for clusteroperator/service-ca changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"operator.openshift.io" "servicecas" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-service-ca-operator"} {"" "namespaces" "" "openshift-service-ca"}] |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-config-observer-configobserver |
openshift-kube-scheduler-operator |
ObserveTLSSecurityProfile |
cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"] | |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-route-controller-manager namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-controller-manager namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-kube-storage-version-migrator namespace |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-config-observer-configobserver | openshift-kube-scheduler-operator | ObservedConfigChanged | Writing updated observed config: map[string]any{ + "servingInfo": map[string]any{ + "cipherSuites": []any{ + string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), + string("TLS_CHACHA20_POLY1305_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"), ..., + }, + "minTLSVersion": string("VersionTLS12"), + }, } |
| | openshift-kube-storage-version-migrator | replicaset-controller | migrator-5bd989df77 | SuccessfulCreate | Created pod: migrator-5bd989df77-hrl5d |
| | openshift-kube-storage-version-migrator | kubelet | migrator-5bd989df77-hrl5d | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:240701090a5f8e40d4b88fa200cf63dffb11a8e2eae713cf3c629b016c2823b0" |
| | openshift-kube-storage-version-migrator | multus | migrator-5bd989df77-hrl5d | AddedInterface | Add eth0 [10.128.0.27/23] from ovn-kubernetes |
| | openshift-kube-storage-version-migrator | default-scheduler | migrator-5bd989df77-hrl5d | Scheduled | Successfully assigned openshift-kube-storage-version-migrator/migrator-5bd989df77-hrl5d to master-0 |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nEtcdEndpointsDegraded: failed to get member list: getting cache client could not retrieve endpoints: node lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values" to "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nEtcdEndpointsDegraded: failed to get member list: getting cache client could not retrieve endpoints: node lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nInstallerControllerDegraded: missing required resources: [configmaps: etcd-all-bundles-0,etcd-endpoints-0,etcd-pod-0, secrets: etcd-all-certs-0]",Progressing changed from Unknown to False ("All is well"),Available changed from Unknown to False ("StaticPodsAvailable: 0 nodes are active; ") |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nEtcdEndpointsDegraded: failed to get member list: getting cache client could not retrieve endpoints: node lister not synced" to "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nEtcdEndpointsDegraded: failed to get member list: getting cache client could not retrieve endpoints: node lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced" to "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nEtcdEndpointsDegraded: failed to get member list: getting cache client could not retrieve endpoints: node lister not synced" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded set to False ("EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced"),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.versions changed from [] to [{"raw-internal" "4.18.32"}] |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator | csi-snapshot-controller-operator-lock | LeaderElection | csi-snapshot-controller-operator-7b87b97578-9fpgj_00e4c43d-640a-4d4c-9c5f-52ddbd02a366 became leader |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/check-endpoints-kubeconfig -n openshift-kube-apiserver because it was missing |
| | openshift-cluster-storage-operator | replicaset-controller | csi-snapshot-controller-74b6595c6d | SuccessfulCreate | Created pod: csi-snapshot-controller-74b6595c6d-q4766 |
| | openshift-cluster-storage-operator | multus | csi-snapshot-controller-74b6595c6d-q4766 | AddedInterface | Add eth0 [10.128.0.28/23] from ovn-kubernetes |
| | openshift-cluster-storage-operator | default-scheduler | csi-snapshot-controller-74b6595c6d-q4766 | Scheduled | Successfully assigned openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-q4766 to master-0 |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorVersionChanged | clusteroperator/etcd version "raw-internal" changed from "" to "4.18.32" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-member-observer-controller-etcdmemberscontroller | etcd-operator | ReportEtcdMembersErrorUpdatingStatus | etcds.operator.openshift.io "cluster" not found |
| | openshift-etcd-operator | openshift-cluster-etcd-operator | etcd-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-authentication-operator | oauth-apiserver-audit-policy-controller-auditpolicycontroller | authentication-operator | FastControllerResync | Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:ingress-to-route-controller because it was missing |
| | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-74b6595c6d-q4766 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a26b20d3ef7b75aeb05acf9be2702f9d478822c43f679ff578811843692b960c" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorVersionChanged | clusteroperator/authentication version "operator" changed from "" to "4.18.32" |
| | openshift-controller-manager | replicaset-controller | controller-manager-dc99ff586 | SuccessfulCreate | Created pod: controller-manager-dc99ff586-qjjb5 |
| (x7) | openshift-controller-manager | replicaset-controller | controller-manager-dc99ff586 | FailedCreate | Error creating: pods "controller-manager-dc99ff586-" is forbidden: error looking up service account openshift-controller-manager/openshift-controller-manager-sa: serviceaccount "openshift-controller-manager-sa" not found |
| | openshift-controller-manager | default-scheduler | controller-manager-dc99ff586-qjjb5 | Scheduled | Successfully assigned openshift-controller-manager/controller-manager-dc99ff586-qjjb5 to master-0 |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/system:openshift:sa-creating-route-controller-manager -n openshift-infra because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/prometheus-k8s -n openshift-route-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n openshift-route-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ServiceCreated | Created Service/route-controller-manager -n openshift-route-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:openshift:tokenreview-openshift-route-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:tokenreview-openshift-route-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:ingress-to-route-controller because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n kube-system because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n kube-system because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/system:openshift:sa-creating-openshift-controller-manager -n openshift-infra because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/system:openshift:sa-creating-openshift-controller-manager -n openshift-infra because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ServiceAccountCreated | Created ServiceAccount/openshift-controller-manager-sa -n openshift-controller-manager because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "All is well" to "RevisionControllerDegraded: configmap \"audit\" not found" |
| | openshift-apiserver-operator | openshift-apiserver-operator-config-observer-configobserver | openshift-apiserver-operator | ObservedConfigChanged | Writing updated observed config: map[string]any{ + "apiServerArguments": map[string]any{ + "feature-gates": []any{ + string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), + string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), + string("BareMetalLoadBalancer=true"), string("BuildCSIVolumes=true"), + string("ChunkSizeMiB=true"), string("CloudDualStackNodeIPs=true"), ..., + }, + }, + "projectConfig": map[string]any{"projectRequestMessage": string("")}, + "routingConfig": map[string]any{"subdomain": string("apps.sno.openstack.lab")}, + "servingInfo": map[string]any{ + "cipherSuites": []any{ + string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), + string("TLS_CHACHA20_POLY1305_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"), ..., + }, + "minTLSVersion": string("VersionTLS12"), + }, + "storageConfig": map[string]any{"urls": []any{string("https://192.168.32.10:2379")}}, } |
| | openshift-apiserver-operator | openshift-apiserver-operator-config-observer-configobserver | openshift-apiserver-operator | RoutingConfigSubdomainChanged | Domain changed from "" to "apps.sno.openstack.lab" |
| | openshift-apiserver-operator | openshift-apiserver-operator-config-observer-configobserver | openshift-apiserver-operator | ObserveTLSSecurityProfile | cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"] |
| | openshift-service-ca-operator | service-ca-operator | service-ca-operator | ServiceAccountCreated | Created ServiceAccount/service-ca -n openshift-service-ca because it was missing |
| | openshift-service-ca-operator | service-ca-operator | service-ca-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/system:openshift:controller:service-ca -n openshift-service-ca because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ServiceCreated | Created Service/controller-manager -n openshift-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/prometheus-k8s -n openshift-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n openshift-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:update-buildconfig-status because it was missing |
| | openshift-service-ca-operator | service-ca-operator | service-ca-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/system:openshift:controller:service-ca -n openshift-service-ca because it was missing |
| | openshift-service-ca-operator | service-ca-operator | service-ca-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:service-ca because it was missing |
| | openshift-service-ca-operator | service-ca-operator | service-ca-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:service-ca because it was missing |
| | openshift-service-ca-operator | service-ca-operator | service-ca-operator | NamespaceCreated | Created Namespace/openshift-service-ca because it was missing |
| | openshift-service-ca-operator | service-ca-operator-status-controller-statussyncer_service-ca | service-ca-operator | OperatorStatusChanged | Status for clusteroperator/service-ca changed: Degraded changed from Unknown to False ("All is well") |
| | openshift-apiserver-operator | openshift-apiserver-operator-config-observer-configobserver | openshift-apiserver-operator | ObserveTLSSecurityProfile | minTLSVersion changed to VersionTLS12 |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:update-buildconfig-status because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-config-observer-configobserver | openshift-apiserver-operator | ObserveStorageUpdated | Updated storage urls to https://192.168.32.10:2379 |
| | openshift-apiserver-operator | openshift-apiserver-operator-config-observer-configobserver | openshift-apiserver-operator | ObserveFeatureFlagsUpdated | Updated apiServerArguments.feature-gates to AWSEFSDriverVolumeMetrics=true,AdminNetworkPolicy=true,AlibabaPlatform=true,AzureWorkloadIdentity=true,BareMetalLoadBalancer=true,BuildCSIVolumes=true,ChunkSizeMiB=true,CloudDualStackNodeIPs=true,DisableKubeletCloudCredentialProviders=true,GCPLabelsTags=true,HardwareSpeed=true,IngressControllerLBSubnetsAWS=true,KMSv1=true,ManagedBootImages=true,ManagedBootImagesAWS=true,MultiArchInstallAWS=true,MultiArchInstallGCP=true,NetworkDiagnosticsConfig=true,NetworkLiveMigration=true,NetworkSegmentation=true,NewOLM=true,NodeDisruptionPolicy=true,OnClusterBuild=true,PersistentIPsForVirtualization=true,PrivateHostedZoneAWS=true,SetEIPForNLBIngressController=true,StreamingCollectionEncodingToJSON=true,StreamingCollectionEncodingToProtobuf=true,VSphereControlPlaneMachineSet=true,VSphereDriverConfiguration=true,VSphereMultiVCenters=true,VSphereStaticIPs=true,ValidatingAdmissionPolicy=true,AWSClusterHostedDNS=false,AdditionalRoutingCapabilities=false,AutomatedEtcdBackup=false,BootcNodeManagement=false,CSIDriverSharedResource=false,ClusterAPIInstall=false,ClusterAPIInstallIBMCloud=false,ClusterMonitoringConfig=false,ConsolePluginContentSecurityPolicy=false,DNSNameResolver=false,DynamicResourceAllocation=false,EtcdBackendQuota=false,EventedPLEG=false,Example=false,ExternalOIDC=false,ExternalOIDCWithUIDAndExtraClaimMappings=false,GCPClusterHostedDNS=false,GatewayAPI=false,ImageStreamImportMode=false,IngressControllerDynamicConfigurationManager=false,InsightsConfig=false,InsightsConfigAPI=false,InsightsOnDemandDataGather=false,InsightsRuntimeExtractor=false,MachineAPIMigration=false,MachineAPIOperatorDisableMachineHealthCheckController=false,MachineAPIProviderOpenStack=false,MachineConfigNodes=false,MaxUnavailableStatefulSet=false,MetricsCollectionProfiles=false,MinimumKubeletVersion=false,MixedCPUsAllocation=false,MultiArchInstallAzure=false,NodeSwap=false,NutanixMultiSubnets=false,OVNObservability=false,OpenShiftPodSecurityAdmission=false,PinnedImages=false,PlatformOperators=false,ProcMountType=false,RouteAdvertisements=false,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,SignatureStores=false,SigstoreImageVerification=false,TranslateStreamCloseWebsocketRequests=false,UpgradeStatus=false,UserNamespacesPodSecurityStandards=false,UserNamespacesSupport=false,VSphereMultiNetworks=false,VolumeAttributesClass=false,VolumeGroupSnapshot=false |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]",Progressing changed from Unknown to False ("NodeInstallerProgressing: 1 node is at revision 0"),Available changed from Unknown to False ("StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0") |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found" |
| | openshift-authentication-operator | cluster-authentication-operator | cluster-authentication-operator-lock | LeaderElection | authentication-operator-755d954778-jrdqm_f33a88bc-e211-4cb3-ba23-73187e0e1c33 became leader |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator | kube-controller-manager-operator | CABundleUpdateRequired | "csr-controller-signer-ca" in "openshift-kube-controller-manager-operator" requires a new cert: configmap doesn't exist |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Degraded changed from Unknown to False ("All is well"),Available changed from Unknown to False ("APIServicesAvailable: endpoints \"api\" not found") |
| | openshift-apiserver-operator | openshift-apiserver-operator-audit-policy-controller-auditpolicycontroller | openshift-apiserver-operator | FastControllerResync | Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:deployer because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:deployer because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:image-trigger-controller because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:image-trigger-controller because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-config-observer-configobserver | kube-controller-manager-operator | ObserveFeatureFlagsUpdated | Updated extendedArguments.feature-gates to AWSEFSDriverVolumeMetrics=true,AdminNetworkPolicy=true,AlibabaPlatform=true,AzureWorkloadIdentity=true,BareMetalLoadBalancer=true,BuildCSIVolumes=true,ChunkSizeMiB=true,CloudDualStackNodeIPs=true,DisableKubeletCloudCredentialProviders=true,GCPLabelsTags=true,HardwareSpeed=true,IngressControllerLBSubnetsAWS=true,KMSv1=true,ManagedBootImages=true,ManagedBootImagesAWS=true,MultiArchInstallAWS=true,MultiArchInstallGCP=true,NetworkDiagnosticsConfig=true,NetworkLiveMigration=true,NetworkSegmentation=true,NewOLM=true,NodeDisruptionPolicy=true,OnClusterBuild=true,PersistentIPsForVirtualization=true,PrivateHostedZoneAWS=true,SetEIPForNLBIngressController=true,StreamingCollectionEncodingToJSON=true,StreamingCollectionEncodingToProtobuf=true,VSphereControlPlaneMachineSet=true,VSphereDriverConfiguration=true,VSphereMultiVCenters=true,VSphereStaticIPs=true,ValidatingAdmissionPolicy=true,AWSClusterHostedDNS=false,AdditionalRoutingCapabilities=false,AutomatedEtcdBackup=false,BootcNodeManagement=false,CSIDriverSharedResource=false,ClusterAPIInstall=false,ClusterAPIInstallIBMCloud=false,ClusterMonitoringConfig=false,ConsolePluginContentSecurityPolicy=false,DNSNameResolver=false,DynamicResourceAllocation=false,EtcdBackendQuota=false,EventedPLEG=false,Example=false,ExternalOIDC=false,ExternalOIDCWithUIDAndExtraClaimMappings=false,GCPClusterHostedDNS=false,GatewayAPI=false,ImageStreamImportMode=false,IngressControllerDynamicConfigurationManager=false,InsightsConfig=false,InsightsConfigAPI=false,InsightsOnDemandDataGather=false,InsightsRuntimeExtractor=false,MachineAPIMigration=false,MachineAPIOperatorDisableMachineHealthCheckController=false,MachineAPIProviderOpenStack=false,MachineConfigNodes=false,MaxUnavailableStatefulSet=false,MetricsCollectionProfiles=false,MinimumKubeletVersion=false,MixedCPUsAllocation=false,MultiArchInstallAzure=false,NodeSwap=false,NutanixMultiSubnets=false,OVNObservability=false,OpenShiftPodSecurityAdmission=false,PinnedImages=false,PlatformOperators=false,ProcMountType=false,RouteAdvertisements=false,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,SignatureStores=false,SigstoreImageVerification=false,TranslateStreamCloseWebsocketRequests=false,UpgradeStatus=false,UserNamespacesPodSecurityStandards=false,UserNamespacesSupport=false,VSphereMultiNetworks=false,VolumeAttributesClass=false,VolumeGroupSnapshot=false |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-config-observer-configobserver |
kube-controller-manager-operator |
ObserveFeatureFlagsUpdated |
Updated featureGates to AWSEFSDriverVolumeMetrics=true,AdminNetworkPolicy=true,AlibabaPlatform=true,AzureWorkloadIdentity=true,BareMetalLoadBalancer=true,BuildCSIVolumes=true,ChunkSizeMiB=true,CloudDualStackNodeIPs=true,DisableKubeletCloudCredentialProviders=true,GCPLabelsTags=true,HardwareSpeed=true,IngressControllerLBSubnetsAWS=true,KMSv1=true,ManagedBootImages=true,ManagedBootImagesAWS=true,MultiArchInstallAWS=true,MultiArchInstallGCP=true,NetworkDiagnosticsConfig=true,NetworkLiveMigration=true,NetworkSegmentation=true,NewOLM=true,NodeDisruptionPolicy=true,OnClusterBuild=true,PersistentIPsForVirtualization=true,PrivateHostedZoneAWS=true,SetEIPForNLBIngressController=true,StreamingCollectionEncodingToJSON=true,StreamingCollectionEncodingToProtobuf=true,VSphereControlPlaneMachineSet=true,VSphereDriverConfiguration=true,VSphereMultiVCenters=true,VSphereStaticIPs=true,ValidatingAdmissionPolicy=true,AWSClusterHostedDNS=false,AdditionalRoutingCapabilities=false,AutomatedEtcdBackup=false,BootcNodeManagement=false,CSIDriverSharedResource=false,ClusterAPIInstall=false,ClusterAPIInstallIBMCloud=false,ClusterMonitoringConfig=false,ConsolePluginContentSecurityPolicy=false,DNSNameResolver=false,DynamicResourceAllocation=false,EtcdBackendQuota=false,EventedPLEG=false,Example=false,ExternalOIDC=false,ExternalOIDCWithUIDAndExtraClaimMappings=false,GCPClusterHostedDNS=false,GatewayAPI=false,ImageStreamImportMode=false,IngressControllerDynamicConfigurationManager=false,InsightsConfig=false,InsightsConfigAPI=false,InsightsOnDemandDataGather=false,InsightsRuntimeExtractor=false,MachineAPIMigration=false,MachineAPIOperatorDisableMachineHealthCheckController=false,MachineAPIProviderOpenStack=false,MachineConfigNodes=false,MaxUnavailableStatefulSet=false,MetricsCollectionProfiles=false,MinimumKubeletVersion=false,MixedCPUsAllocation=false,MultiArchInstallAzure=false,NodeSwap=false,NutanixMultiSubnets=false,OVNObservability=false,OpenShiftPodSecurityAdmission=false,PinnedImages=false,PlatformOperators=false,ProcMountType=false,RouteAdvertisements=false,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,SignatureStores=false,SigstoreImageVerification=false,TranslateStreamCloseWebsocketRequests=false,UpgradeStatus=false,UserNamespacesPodSecurityStandards=false,UserNamespacesSupport=false,VSphereMultiNetworks=false,VolumeAttributesClass=false,VolumeGroupSnapshot=false | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-config-observer-configobserver |
kube-controller-manager-operator |
ObserveTLSSecurityProfile |
minTLSVersion changed to VersionTLS12 | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-config-observer-configobserver |
kube-controller-manager-operator |
ObserveTLSSecurityProfile |
cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"] | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
SecretCreated |
Created Secret/control-plane-node-admin-client-cert-key -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-resource-sync-controller-resourcesynccontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/aggregator-client-ca -n openshift-kube-apiserver because it was missing | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-service-ca namespace | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-config-observer-configobserver |
kube-controller-manager-operator |
ObservedConfigChanged |
Writing updated observed config: map[string]any{ + "extendedArguments": map[string]any{ + "cluster-cidr": []any{string("10.128.0.0/16")}, + "cluster-name": []any{string("sno-2clbh")}, + "feature-gates": []any{ + string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), + string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), + string("BareMetalLoadBalancer=true"), string("BuildCSIVolumes=true"), + string("ChunkSizeMiB=true"), string("CloudDualStackNodeIPs=true"), ..., + }, + "service-cluster-ip-range": []any{string("172.30.0.0/16")}, + }, + "featureGates": []any{ + string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), + string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), + string("BareMetalLoadBalancer=true"), string("BuildCSIVolumes=true"), + string("ChunkSizeMiB=true"), string("CloudDualStackNodeIPs=true"), + string("DisableKubeletCloudCredentialProviders=true"), + string("GCPLabelsTags=true"), string("HardwareSpeed=true"), + string("IngressControllerLBSubnetsAWS=true"), string("KMSv1=true"), + string("ManagedBootImages=true"), string("ManagedBootImagesAWS=true"), + string("MultiArchInstallAWS=true"), ..., + }, + "servingInfo": map[string]any{ + "cipherSuites": []any{ + string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), + string("TLS_CHACHA20_POLY1305_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"), ..., + }, + "minTLSVersion": string("VersionTLS12"), + }, } | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Progressing changed from Unknown to False ("All is well") | |
openshift-service-ca-operator |
service-ca-operator |
service-ca-operator |
ConfigMapCreated |
Created ConfigMap/signing-cabundle -n openshift-service-ca because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-backingresourcecontroller-backingresourcecontroller-staticresources |
openshift-kube-scheduler-operator |
ServiceAccountCreated |
Created ServiceAccount/installer-sa -n openshift-kube-scheduler because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources |
kube-apiserver-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/authentication-reader-for-authenticated-users -n kube-system because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found" | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]",Progressing message changed from "All is well" to "NodeInstallerProgressing: 1 node is at revision 0",Available changed from Unknown to False ("StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0") | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found" to "RevisionControllerDegraded: configmap \"audit\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found" | |
openshift-route-controller-manager |
deployment-controller |
route-controller-manager |
ScalingReplicaSet |
Scaled up replica set route-controller-manager-69bd477586 to 1 | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/control-plane-node-kubeconfig -n openshift-kube-apiserver because it was missing | |
openshift-route-controller-manager |
replicaset-controller |
route-controller-manager-69bd477586 |
SuccessfulCreate |
Created pod: route-controller-manager-69bd477586-66ml6 | |
openshift-network-operator |
kubelet |
iptables-alerter-v2h9q |
Created |
Created container: iptables-alerter | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources |
kube-apiserver-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:kube-apiserver-recovery because it was missing | |
openshift-network-operator |
kubelet |
iptables-alerter-v2h9q |
Started |
Started container iptables-alerter | |
openshift-route-controller-manager |
default-scheduler |
route-controller-manager-69bd477586-66ml6 |
Scheduled |
Successfully assigned openshift-route-controller-manager/route-controller-manager-69bd477586-66ml6 to master-0 | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/csr-controller-signer-ca -n openshift-kube-controller-manager-operator because it was missing | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded changed from Unknown to False ("RevisionControllerDegraded: configmap \"audit\" not found"),Available changed from Unknown to False ("OAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found"),Upgradeable changed from Unknown to True ("All is well") | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.versions changed from [] to [{"operator" "4.18.32"}] | |
openshift-controller-manager |
deployment-controller |
controller-manager |
ScalingReplicaSet |
Scaled up replica set controller-manager-6956dbf788 to 1 from 0 | |
openshift-controller-manager |
deployment-controller |
controller-manager |
ScalingReplicaSet |
Scaled down replica set controller-manager-dc99ff586 to 0 from 1 | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-config-observer-configobserver |
etcd-operator |
ObservedConfigChanged |
Writing updated observed config: map[string]any{ + "controlPlane": map[string]any{"replicas": float64(1)}, + "servingInfo": map[string]any{ + "cipherSuites": []any{ + string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), + string("TLS_CHACHA20_POLY1305_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"), ..., + }, + "minTLSVersion": string("VersionTLS12"), + }, } | |
openshift-controller-manager |
replicaset-controller |
controller-manager-dc99ff586 |
SuccessfulDelete |
Deleted pod: controller-manager-dc99ff586-qjjb5 | |
| (x2) | openshift-controller-manager |
kubelet |
controller-manager-dc99ff586-qjjb5 |
FailedMount |
MountVolume.SetUp failed for volume "proxy-ca-bundles" : configmap "openshift-global-ca" not found |
| (x2) | openshift-controller-manager |
kubelet |
controller-manager-dc99ff586-qjjb5 |
FailedMount |
MountVolume.SetUp failed for volume "config" : configmap "config" not found |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
DeploymentCreated |
Created Deployment.apps/route-controller-manager -n openshift-route-controller-manager because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/config -n openshift-route-controller-manager because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/openshift-global-ca -n openshift-controller-manager because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/openshift-service-ca -n openshift-controller-manager because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/config -n openshift-controller-manager because it was missing | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found" to "RevisionControllerDegraded: configmap \"audit\" not found\nAPIServerDeploymentDegraded: waiting for observed configuration to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for observed configuration to be available\nAPIServerWorkloadDegraded: ",Progressing changed from Unknown to False ("All is well") | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-resource-sync-controller-resourcesynccontroller |
openshift-kube-scheduler-operator |
SecretCreated |
Created Secret/kube-scheduler-client-cert-key -n openshift-kube-scheduler because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator |
kube-controller-manager-operator |
TargetUpdateRequired |
"csr-signer" in "openshift-kube-controller-manager-operator" requires a new target cert/key pair: secret doesn't exist | |
openshift-service-ca-operator |
service-ca-operator |
service-ca-operator |
SecretCreated |
Created Secret/signing-key -n openshift-service-ca because it was missing | |
openshift-controller-manager |
replicaset-controller |
controller-manager-6956dbf788 |
SuccessfulCreate |
Created pod: controller-manager-6956dbf788-5r68h | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-config-observer-configobserver |
etcd-operator |
ObserveTLSSecurityProfile |
cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"] | |
openshift-controller-manager |
default-scheduler |
controller-manager-6956dbf788-5r68h |
FailedScheduling |
0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-backingresourcecontroller-backingresourcecontroller-staticresources |
openshift-kube-scheduler-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:openshift-kube-scheduler-installer because it was missing | |
| (x2) | openshift-etcd-operator |
openshift-cluster-etcd-operator-etcd-node |
etcd-operator |
MasterNodeObserved |
Observed new master node master-0 |
openshift-etcd-operator |
openshift-cluster-etcd-operator-backingresourcecontroller-backingresourcecontroller-staticresources |
etcd-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:openshift-etcd-installer because it was missing | |
openshift-authentication-operator |
cluster-authentication-operator-config-observer-configobserver |
authentication-operator |
ObserveAPIAudiences |
service account issuer changed from "" to https://kubernetes.default.svc | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources |
openshift-kube-scheduler-operator |
NamespaceUpdated |
Updated Namespace/openshift-kube-scheduler because it changed | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-targetconfigcontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/config -n openshift-kube-scheduler because it was missing | |
openshift-authentication-operator |
cluster-authentication-operator-config-observer-configobserver |
authentication-operator |
ObserveStorageUpdated |
Updated storage urls to https://192.168.32.10:2379 | |
openshift-service-ca-operator |
service-ca-operator-status-controller-statussyncer_service-ca |
service-ca-operator |
OperatorStatusChanged |
Status for clusteroperator/service-ca changed: Progressing changed from Unknown to True ("Progressing: \nProgressing: service-ca does not have available replicas"),Available changed from Unknown to True ("All is well"),Upgradeable changed from Unknown to True ("All is well") | |
openshift-service-ca |
replicaset-controller |
service-ca-676cd8b9b5 |
SuccessfulCreate |
Created pod: service-ca-676cd8b9b5-bfm5s | |
openshift-service-ca-operator |
service-ca-operator |
service-ca-operator |
DeploymentCreated |
Created Deployment.apps/service-ca -n openshift-service-ca because it was missing | |
openshift-authentication-operator |
cluster-authentication-operator-config-observer-configobserver |
authentication-operator |
ObserveTLSSecurityProfile |
cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"] | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig -n openshift-kube-scheduler because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources |
openshift-kube-scheduler-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-kube-scheduler -n kube-system because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources |
openshift-kube-scheduler-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:kube-scheduler:public-2 because it was missing | |
openshift-authentication-operator |
cluster-authentication-operator-config-observer-configobserver |
authentication-operator |
ObservedConfigChanged |
Writing updated section ("oauthAPIServer") of observed config: "\u00a0\u00a0map[string]any(\n-\u00a0\tnil,\n+\u00a0\t{\n+\u00a0\t\t\"apiServerArguments\": map[string]any{\n+\u00a0\t\t\t\"api-audiences\": []any{string(\"https://kubernetes.default.svc\")},\n+\u00a0\t\t\t\"cors-allowed-origins\": []any{string(`//127\\.0\\.0\\.1(:|$)`), string(\"//localhost(:|$)\")},\n+\u00a0\t\t\t\"etcd-servers\": []any{string(\"https://192.168.32.10:2379\")},\n+\u00a0\t\t\t\"tls-cipher-suites\": []any{\n+\u00a0\t\t\t\tstring(\"TLS_AES_128_GCM_SHA256\"), string(\"TLS_AES_256_GCM_SHA384\"),\n+\u00a0\t\t\t\tstring(\"TLS_CHACHA20_POLY1305_SHA256\"),\n+\u00a0\t\t\t\tstring(\"TLS_ECDHE_ECDSA_WITH_AES_128_GCM\"...), ...,\n+\u00a0\t\t\t},\n+\u00a0\t\t\t\"tls-min-version\": string(\"VersionTLS12\"),\n+\u00a0\t\t},\n+\u00a0\t},\n\u00a0\u00a0)\n" | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller |
kube-controller-manager-operator |
TargetConfigDeleted |
Deleted target configmap openshift-config-managed/csr-controller-ca because source config does not exist | |
openshift-authentication-operator |
cluster-authentication-operator-config-observer-configobserver |
authentication-operator |
ObserveTemplates |
templates changed to map["error":"/var/config/system/secrets/v4-0-config-system-ocp-branding-template/errors.html" "login":"/var/config/system/secrets/v4-0-config-system-ocp-branding-template/login.html" "providerSelection":"/var/config/system/secrets/v4-0-config-system-ocp-branding-template/providers.html"] | |
openshift-service-ca |
deployment-controller |
service-ca |
ScalingReplicaSet |
Scaled up replica set service-ca-676cd8b9b5 to 1 | |
openshift-service-ca |
default-scheduler |
service-ca-676cd8b9b5-bfm5s |
Scheduled |
Successfully assigned openshift-service-ca/service-ca-676cd8b9b5-bfm5s to master-0 | |
openshift-service-ca-operator |
service-ca-operator-resource-sync-controller-resourcesynccontroller |
service-ca-operator |
ConfigMapCreated |
Created ConfigMap/service-ca -n openshift-config-managed because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nCertRotation_KubeSchedulerClient_Degraded: configmaps \"kube-control-plane-signer-ca\" already exists\nCertRotation_InternalLoadBalancerServing_Degraded: configmaps \"loadbalancer-serving-ca\" already exists\nCertRotation_ControlPlaneNodeAdminClient_Degraded: configmaps \"kube-control-plane-signer-ca\" already exists" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nCertRotation_KubeSchedulerClient_Degraded: configmaps \"kube-control-plane-signer-ca\" already exists\nCertRotation_InternalLoadBalancerServing_Degraded: configmaps \"loadbalancer-serving-ca\" already exists" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources |
kube-apiserver-operator |
ServiceAccountCreated |
Created ServiceAccount/localhost-recovery-client -n openshift-kube-apiserver because it was missing | |
openshift-authentication-operator |
cluster-authentication-operator-config-observer-configobserver |
authentication-operator |
ObserveAuditProfile |
AuditProfile changed from '<nil>' to 'map[audit-log-format:[json] audit-log-maxbackup:[10] audit-log-maxsize:[100] audit-log-path:[/var/log/oauth-server/audit.log] audit-policy-file:[/var/run/configmaps/audit/audit.yaml]]' | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources |
openshift-kube-scheduler-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/system:openshift:sa-listing-configmaps -n openshift-kube-scheduler because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-route-controller-manager -n openshift-route-controller-manager because it was missing | |
openshift-authentication-operator |
cluster-authentication-operator-config-observer-configobserver |
authentication-operator |
ObserveTLSSecurityProfile |
minTLSVersion changed to VersionTLS12 | |
openshift-authentication-operator |
cluster-authentication-operator-config-observer-configobserver |
authentication-operator |
ObserveTLSSecurityProfile |
cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"] | |
openshift-authentication-operator |
cluster-authentication-operator-config-observer-configobserver |
authentication-operator |
ObserveTLSSecurityProfile |
minTLSVersion changed to VersionTLS12 | |
openshift-authentication-operator |
cluster-authentication-operator-config-observer-configobserver |
authentication-operator |
ObserveAPIServerURL |
loginURL changed from "" to https://api.sno.openstack.lab:6443 | |
openshift-authentication-operator |
cluster-authentication-operator-config-observer-configobserver |
authentication-operator |
ObserveTokenConfig |
accessTokenMaxAgeSeconds changed from 0 to 86400 | |
openshift-authentication-operator |
cluster-authentication-operator-config-observer-configobserver |
authentication-operator |
ObservedConfigChanged |
Writing updated section ("oauthServer") of observed config: "\u00a0\u00a0map[string]any(\n-\u00a0\tnil,\n+\u00a0\t{\n+\u00a0\t\t\"corsAllowedOrigins\": []any{string(`//127\\.0\\.0\\.1(:|$)`), string(\"//localhost(:|$)\")},\n+\u00a0\t\t\"oauthConfig\": map[string]any{\n+\u00a0\t\t\t\"assetPublicURL\": string(\"\"),\n+\u00a0\t\t\t\"loginURL\": string(\"https://api.sno.openstack.lab:6443\"),\n+\u00a0\t\t\t\"templates\": map[string]any{\n+\u00a0\t\t\t\t\"error\": string(\"/var/config/system/secrets/v4-0-\"...),\n+\u00a0\t\t\t\t\"login\": string(\"/var/config/system/secrets/v4-0-\"...),\n+\u00a0\t\t\t\t\"providerSelection\": string(\"/var/config/system/secrets/v4-0-\"...),\n+\u00a0\t\t\t},\n+\u00a0\t\t\t\"tokenConfig\": map[string]any{\n+\u00a0\t\t\t\t\"accessTokenMaxAgeSeconds\": float64(86400),\n+\u00a0\t\t\t\t\"authorizeTokenMaxAgeSeconds\": float64(300),\n+\u00a0\t\t\t},\n+\u00a0\t\t},\n+\u00a0\t\t\"serverArguments\": map[string]any{\n+\u00a0\t\t\t\"audit-log-format\": []any{string(\"json\")},\n+\u00a0\t\t\t\"audit-log-maxbackup\": []any{string(\"10\")},\n+\u00a0\t\t\t\"audit-log-maxsize\": []any{string(\"100\")},\n+\u00a0\t\t\t\"audit-log-path\": []any{string(\"/var/log/oauth-server/audit.log\")},\n+\u00a0\t\t\t\"audit-policy-file\": []any{string(\"/var/run/configmaps/audit/audit.\"...)},\n+\u00a0\t\t},\n+\u00a0\t\t\"servingInfo\": map[string]any{\n+\u00a0\t\t\t\"cipherSuites\": []any{\n+\u00a0\t\t\t\tstring(\"TLS_AES_128_GCM_SHA256\"), string(\"TLS_AES_256_GCM_SHA384\"),\n+\u00a0\t\t\t\tstring(\"TLS_CHACHA20_POLY1305_SHA256\"),\n+\u00a0\t\t\t\tstring(\"TLS_ECDHE_ECDSA_WITH_AES_128_GCM\"...), ...,\n+\u00a0\t\t\t},\n+\u00a0\t\t\t\"minTLSVersion\": string(\"VersionTLS12\"),\n+\u00a0\t\t},\n+\u00a0\t\t\"volumesToMount\": map[string]any{\"identityProviders\": string(\"{}\")},\n+\u00a0\t},\n\u00a0\u00a0)\n" | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found" to "RevisionControllerDegraded: configmap \"audit\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found" | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-route-controller-manager -n openshift-route-controller-manager because it was missing | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-etcdstaticresources-etcdstaticresources-staticresources |
etcd-operator |
NamespaceUpdated |
Updated Namespace/openshift-etcd because it changed | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller |
etcd-operator |
ConfigMapUpdated |
Updated ConfigMap/etcd-ca-bundle -n openshift-etcd-operator: caused by changes in data.ca-bundle.crt | |
| (x2) | openshift-etcd-operator |
openshift-cluster-etcd-operator-etcd-node |
etcd-operator |
MasterNodesReadyChanged |
All master nodes are ready |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n openshift-controller-manager because it was missing | |
| (x3) | openshift-controller-manager | kubelet | controller-manager-dc99ff586-qjjb5 | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found |
| (x3) | openshift-controller-manager | kubelet | controller-manager-dc99ff586-qjjb5 | FailedMount | MountVolume.SetUp failed for volume "client-ca" : configmap "client-ca" not found |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n openshift-controller-manager because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-backingresourcecontroller-backingresourcecontroller-staticresources | etcd-operator | ServiceAccountCreated | Created ServiceAccount/installer-sa -n openshift-etcd because it was missing |
| | openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled down replica set controller-manager-6956dbf788 to 0 from 1 |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-defrag-controller-defragcontroller | etcd-operator | DefragControllerUpdatingStatus | Operation cannot be fulfilled on etcds.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again |
| | openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled up replica set controller-manager-6fcbb7f9bd to 1 from 0 |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | TargetConfigDeleted | Deleted target configmap openshift-kube-apiserver/kubelet-serving-ca because source config does not exist |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-serving-ca -n openshift-config because it was missing |
| | openshift-kube-storage-version-migrator | kubelet | migrator-5bd989df77-hrl5d | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:240701090a5f8e40d4b88fa200cf63dffb11a8e2eae713cf3c629b016c2823b0" in 3.776s (3.776s including waiting). Image size: 438101353 bytes. |
| | openshift-kube-storage-version-migrator | kubelet | migrator-5bd989df77-hrl5d | Created | Created container: migrator |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-targetconfigcontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/serviceaccount-ca -n openshift-kube-scheduler because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | openshift-kube-scheduler-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/system:openshift:sa-listing-configmaps -n openshift-kube-scheduler because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiserverstaticresources-apiserverstaticresources-staticresources | openshift-apiserver-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-apiserver because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiserverstaticresources-apiserverstaticresources-staticresources | openshift-apiserver-operator | NamespaceCreated | Created Namespace/openshift-apiserver because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Degraded message changed from "All is well" to "OpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/leader-role.yaml\" (string): namespaces \"openshift-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/leader-rolebinding.yaml\" (string): namespaces \"openshift-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/route-controller-manager-leader-role.yaml\" (string): namespaces \"openshift-route-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/route-controller-manager-leader-rolebinding.yaml\" (string): namespaces \"openshift-route-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: " |
| | openshift-config-operator | kubelet | openshift-config-operator-7c6bdb986f-fcnqs | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2431778975829348e271dc9bf123c7a24c81a52537a61414cd17c8436436078b" in 5.018s (5.018s including waiting). Image size: 490819380 bytes. |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-ca-bundle -n openshift-config because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-env-var-controller | etcd-operator | EnvVarControllerUpdatingStatus | Operation cannot be fulfilled on etcds.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapUpdated | Updated ConfigMap/config -n openshift-controller-manager: cause by changes in data.openshift-controller-manager.openshift-global-ca.configmap |
| | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-55b69c6c48-mzk89 | Created | Created container: copy-operator-controller-manifests |
| | openshift-controller-manager | replicaset-controller | controller-manager-6fcbb7f9bd | SuccessfulCreate | Created pod: controller-manager-6fcbb7f9bd-gdt9b |
| | openshift-controller-manager | default-scheduler | controller-manager-6fcbb7f9bd-gdt9b | FailedScheduling | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. |
| | openshift-controller-manager | replicaset-controller | controller-manager-6956dbf788 | SuccessfulDelete | Deleted pod: controller-manager-6956dbf788-5r68h |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator | kube-controller-manager-operator | SecretCreated | Created Secret/csr-signer -n openshift-kube-controller-manager-operator because it was missing |
| | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-74b6595c6d-q4766 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a26b20d3ef7b75aeb05acf9be2702f9d478822c43f679ff578811843692b960c" in 3.588s (3.588s including waiting). Image size: 458531660 bytes. |
| | openshift-controller-manager | default-scheduler | controller-manager-6956dbf788-5r68h | FailedScheduling | skip schedule deleting pod: openshift-controller-manager/controller-manager-6956dbf788-5r68h |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-apiserver namespace |
| | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-55b69c6c48-mzk89 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae30b3ab740f21c451d0272bceacb99fa34d22bbf2ea22f1e1e18230a156104b" in 5.009s (5.009s including waiting). Image size: 489891070 bytes. |
| | openshift-service-ca | multus | service-ca-676cd8b9b5-bfm5s | AddedInterface | Add eth0 [10.128.0.31/23] from ovn-kubernetes |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | TargetConfigDeleted | Deleted target configmap openshift-config-managed/kubelet-serving-ca because source config does not exist |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | PrometheusRuleCreated | Created PrometheusRule.monitoring.coreos.com/kube-apiserver-requests -n openshift-kube-apiserver because it was missing |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorVersionChanged | clusteroperator/csi-snapshot-controller version "csi-snapshot-controller" changed from "" to "4.18.32" |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Progressing changed from True to False ("All is well"),Available changed from False to True ("All is well"),status.versions changed from [] to [{"operator" "4.18.32"} {"csi-snapshot-controller" "4.18.32"}] |
| | openshift-cluster-storage-operator | snapshot-controller-leader/csi-snapshot-controller-74b6595c6d-q4766 | snapshot-controller-leader | LeaderElection | csi-snapshot-controller-74b6595c6d-q4766 became leader |
| | openshift-config-operator | config-operator-configoperatorcontroller | openshift-config-operator | ConfigOperatorStatusChanged | Operator conditions defaulted: [{OperatorAvailable True 2026-02-17 15:03:03 +0000 UTC AsExpected } {OperatorProgressing False 2026-02-17 15:03:03 +0000 UTC AsExpected } {OperatorUpgradeable True 2026-02-17 15:03:03 +0000 UTC AsExpected }] |
| | openshift-config-operator | config-operator-status-controller-statussyncer_config-operator | openshift-config-operator | OperatorVersionChanged | clusteroperator/config-operator version "operator" changed from "" to "4.18.32" |
| | openshift-config-operator | config-operator-status-controller-statussyncer_config-operator | openshift-config-operator | OperatorStatusChanged | Status for clusteroperator/config-operator changed: Degraded set to Unknown (""),Progressing set to False ("All is well"),Available set to True ("All is well"),Upgradeable set to True ("All is well"),EvaluationConditionsDetected set to Unknown (""),status.versions changed from [] to [{"feature-gates" ""} {"operator" "4.18.32"}] |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-client-token -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | CustomResourceDefinitionUpdated | Updated CustomResourceDefinition.apiextensions.k8s.io/apirequestcounts.apiserver.openshift.io because it changed |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | StorageVersionMigrationCreated | Created StorageVersionMigration.migration.k8s.io/flowcontrol-flowschema-storage-version-migration because it was missing |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found" to "RevisionControllerDegraded: configmap \"audit\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: ",Progressing changed from Unknown to False ("All is well") |
| | openshift-authentication-operator | oauth-apiserver-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | authentication-operator | FastControllerResync | Controller "openshift-apiserver-APIService" resync interval is set to 10s which might lead to client request throttling |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "OAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" to "APIServicesAvailable: endpoints \"api\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: " to "RevisionControllerDegraded: configmap \"audit\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found" |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Degraded message changed from "OpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/leader-role.yaml\" (string): namespaces \"openshift-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/leader-rolebinding.yaml\" (string): namespaces \"openshift-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/route-controller-manager-leader-role.yaml\" (string): namespaces \"openshift-route-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/route-controller-manager-leader-rolebinding.yaml\" (string): namespaces \"openshift-route-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: " to "All is well" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | StorageVersionMigrationCreated | Created StorageVersionMigration.migration.k8s.io/flowcontrol-prioritylevel-storage-version-migration because it was missing |
| | openshift-config-operator | config-operator-configoperatorcontroller | openshift-config-operator | FastControllerResync | Controller "ConfigOperatorController" resync interval is set to 10s which might lead to client request throttling |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-satokensignercontroller | kube-controller-manager-operator | SecretCreated | Created Secret/service-account-private-key -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-targetconfigcontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/config -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-backingresourcecontroller-backingresourcecontroller-staticresources | kube-controller-manager-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:openshift-kube-controller-manager-installer because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-etcdstaticresources-etcdstaticresources-staticresources | etcd-operator | ServiceAccountCreated | Created ServiceAccount/etcd-sa -n openshift-etcd because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-metric-serving-ca -n openshift-etcd-operator because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | StorageVersionMigrationCreated | Created StorageVersionMigration.migration.k8s.io/flowcontrol-flowschema-storage-version-migration-v1beta3 because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | StorageVersionMigrationCreated | Created StorageVersionMigration.migration.k8s.io/flowcontrol-prioritylevel-storage-version-migration-v1beta3 because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | PrometheusRuleCreated | Created PrometheusRule.monitoring.coreos.com/api-usage -n openshift-kube-apiserver because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-backingresourcecontroller-backingresourcecontroller-staticresources | kube-controller-manager-operator | ServiceAccountCreated | Created ServiceAccount/installer-sa -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-storage-version-migrator | kubelet | migrator-5bd989df77-hrl5d | Started | Started container migrator |
| | openshift-kube-storage-version-migrator | kubelet | migrator-5bd989df77-hrl5d | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:240701090a5f8e40d4b88fa200cf63dffb11a8e2eae713cf3c629b016c2823b0" already present on machine |
| | openshift-kube-storage-version-migrator | kubelet | migrator-5bd989df77-hrl5d | Created | Created container: graceful-termination |
| | openshift-kube-storage-version-migrator | kubelet | migrator-5bd989df77-hrl5d | Started | Started container graceful-termination |
| | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-55b69c6c48-mzk89 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:333e6572029953b4c4676076f0991ee6e5c7d28cbe2887c71b1682f19831d8a1" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | PrometheusRuleCreated | Created PrometheusRule.monitoring.coreos.com/audit-errors -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | PrometheusRuleCreated | Created PrometheusRule.monitoring.coreos.com/kube-apiserver-slos-basic -n openshift-kube-apiserver because it was missing |
| | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-55b69c6c48-mzk89 | Started | Started container copy-operator-controller-manifests |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | openshift-kube-scheduler-operator | ServiceCreated | Created Service/scheduler -n openshift-kube-scheduler because it was missing |
| | openshift-config-operator | config-operator-status-controller-statussyncer_config-operator | openshift-config-operator | OperatorVersionChanged | clusteroperator/config-operator version "feature-gates" changed from "" to "4.18.32" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/service-ca -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | kube-controller-manager-operator | NamespaceUpdated | Updated Namespace/openshift-kube-controller-manager because it changed |
| | openshift-config-operator | config-operator | config-operator-lock | LeaderElection | openshift-config-operator-7c6bdb986f-fcnqs_ee7da6ed-9c2e-407b-a9c7-b63eeeeb2ea8 became leader |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator | kube-storage-version-migrator-operator | OperatorStatusChanged | Status for clusteroperator/kube-storage-version-migrator changed: Progressing changed from True to False ("All is well"),Available changed from False to True ("All is well") |
| | openshift-controller-manager | default-scheduler | controller-manager-6fcbb7f9bd-gdt9b | Scheduled | Successfully assigned openshift-controller-manager/controller-manager-6fcbb7f9bd-gdt9b to master-0 |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorVersionChanged | clusteroperator/csi-snapshot-controller version "operator" changed from "" to "4.18.32" |
| | openshift-apiserver-operator | openshift-apiserver-operator-revisioncontroller | openshift-apiserver-operator | StartingNewRevision | new revision 1 triggered by "configmap \"audit-0\" not found" |
| | openshift-apiserver-operator | openshift-apiserver-operator-audit-policy-controller-auditpolicycontroller | openshift-apiserver-operator | ConfigMapCreated | Created ConfigMap/audit -n openshift-apiserver because it was missing |
| | openshift-config-operator | config-operator-status-controller-statussyncer_config-operator | openshift-config-operator | OperatorStatusChanged | Status for clusteroperator/config-operator changed: Degraded changed from Unknown to False ("All is well"),status.versions changed from [{"feature-gates" ""} {"operator" "4.18.32"}] to [{"feature-gates" "4.18.32"} {"operator" "4.18.32"}] |
| | openshift-apiserver-operator | openshift-apiserver-operator-openshiftapiserver-workloadworkloadcontroller | openshift-apiserver-operator | ConfigMapCreated | Created ConfigMap/config -n openshift-apiserver because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-revisioncontroller | openshift-apiserver-operator | ConfigMapCreated | Created ConfigMap/audit-1 -n openshift-apiserver because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServicesAvailable: endpoints \"api\" not found" to "APIServicesAvailable: PreconditionNotReady" |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiserverstaticresources-apiserverstaticresources-staticresources | openshift-apiserver-operator | ServiceCreated | Created Service/api -n openshift-apiserver because it was missing |
| (x6) | openshift-multus | kubelet | network-metrics-daemon-bnllz | FailedMount | MountVolume.SetUp failed for volume "metrics-certs" : secret "metrics-daemon-secret" not found |
| | openshift-apiserver-operator | openshift-apiserver-operator-resource-sync-controller-resourcesynccontroller | openshift-apiserver-operator | ConfigMapCreated | Created ConfigMap/etcd-serving-ca -n openshift-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/etcd-serving-ca -n openshift-kube-apiserver because it was missing |
| (x6) | openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-ff6c9b66-k8xp8 | FailedMount | MountVolume.SetUp failed for volume "apiservice-cert" : secret "performance-addon-operator-webhook-cert" not found |
| (x6) | openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-ff6c9b66-k8xp8 | FailedMount | MountVolume.SetUp failed for volume "node-tuning-operator-tls" : secret "node-tuning-operator-tls" not found |
| (x6) | openshift-operator-lifecycle-manager | kubelet | package-server-manager-5c696dbdcd-t7n5b | FailedMount | MountVolume.SetUp failed for volume "package-server-manager-serving-cert" : secret "package-server-manager-serving-cert" not found |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | PrometheusRuleCreated | Created PrometheusRule.monitoring.coreos.com/podsecurity -n openshift-kube-apiserver because it was missing |
| (x6) | openshift-operator-lifecycle-manager | kubelet | catalog-operator-588944557d-kjh2v | FailedMount | MountVolume.SetUp failed for volume "srv-cert" : secret "catalog-operator-serving-cert" not found |
| | openshift-service-ca-operator | service-ca-operator-status-controller-statussyncer_service-ca | service-ca-operator | OperatorVersionChanged | clusteroperator/service-ca version "operator" changed from "" to "4.18.32" |
| | openshift-service-ca-operator | service-ca-operator-status-controller-statussyncer_service-ca | service-ca-operator | OperatorStatusChanged | Status for clusteroperator/service-ca changed: status.versions changed from [] to [{"operator" "4.18.32"}] |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-targetconfigcontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/scheduler-kubeconfig -n openshift-kube-scheduler because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | openshift-kube-scheduler-operator | ServiceAccountCreated | Created ServiceAccount/openshift-kube-scheduler-sa -n openshift-kube-scheduler because it was missing |
| (x6) | openshift-monitoring | kubelet | cluster-monitoring-operator-756d64c8c4-ddgs9 | FailedMount | MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" : secret "cluster-monitoring-operator-tls" not found |
| (x6) | openshift-multus | kubelet | multus-admission-controller-7c64d55f8-fzfsp | FailedMount | MountVolume.SetUp failed for volume "webhook-certs" : secret "multus-admission-controller-secret" not found |
| (x6) | openshift-marketplace | kubelet | marketplace-operator-6cc5b65c6b-wqxmh | FailedMount | MountVolume.SetUp failed for volume "marketplace-operator-metrics" : secret "marketplace-operator-metrics" not found |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | openshift-kube-scheduler-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:kube-scheduler-recovery because it was missing |
| (x6) | openshift-dns-operator | kubelet | dns-operator-86b8869b79-lmqrr | FailedMount | MountVolume.SetUp failed for volume "metrics-tls" : secret "metrics-tls" not found |
| (x6) | openshift-image-registry | kubelet | cluster-image-registry-operator-96c8c64b8-dtwmd | FailedMount | MountVolume.SetUp failed for volume "image-registry-operator-tls" : secret "image-registry-operator-tls" not found |
| (x6) | openshift-operator-lifecycle-manager | kubelet | olm-operator-6b56bd877c-tk8xm | FailedMount | MountVolume.SetUp failed for volume "srv-cert" : secret "olm-operator-serving-cert" not found |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-config-observer-configobserver | etcd-operator | ObservedConfigWriteError | Failed to write observed config: Operation cannot be fulfilled on etcds.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again |
openshift-etcd-operator |
openshift-cluster-etcd-operator-etcdstaticresources-etcdstaticresources-staticresources |
etcd-operator |
ServiceUpdated |
Updated Service/etcd -n openshift-etcd because it changed | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller |
etcd-operator |
ConfigMapCreated |
Created ConfigMap/cluster-config-v1 -n openshift-etcd because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/aggregator-client-ca -n openshift-kube-controller-manager because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/kube-controller-cert-syncer-kubeconfig -n openshift-kube-controller-manager because it was missing | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: endpoints \"api\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" to "APIServicesAvailable: endpoints \"api\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." | |
| (x5) | openshift-etcd-operator | openshift-cluster-etcd-operator-installer-controller | etcd-operator | RequiredInstallerResourcesMissing | configmaps: etcd-all-bundles-0,etcd-endpoints-0,etcd-pod-0, secrets: etcd-all-certs-0 |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | kube-controller-manager-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-kube-controller-manager -n kube-system because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | kube-controller-manager-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/system:openshift:leader-election-lock-cluster-policy-controller -n openshift-kube-controller-manager because it was missing |
| (x6) | openshift-cluster-version | kubelet | cluster-version-operator-76959b6567-v49tq | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : secret "cluster-version-operator-serving-cert" not found |
| (x6) | openshift-multus | kubelet | multus-admission-controller-7c64d55f8-fzfsp | FailedMount | MountVolume.SetUp failed for volume "webhook-certs" : secret "multus-admission-controller-secret" not found |
| (x6) | openshift-ingress-operator | kubelet | ingress-operator-c588d8cb4-nclxg | FailedMount | MountVolume.SetUp failed for volume "metrics-tls" : secret "metrics-tls" not found |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | kube-controller-manager-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-election-lock-cluster-policy-controller -n openshift-kube-controller-manager because it was missing |
| | openshift-service-ca | service-ca-controller | service-ca-controller-lock | LeaderElection | service-ca-676cd8b9b5-bfm5s_15506d1c-66c9-4471-83c5-4c2f35d26be3 became leader |
| (x3) | openshift-controller-manager | kubelet | controller-manager-6fcbb7f9bd-gdt9b | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-etcdstaticresources-etcdstaticresources-staticresources | etcd-operator | ServiceMonitorCreated | Created ServiceMonitor.monitoring.coreos.com/etcd -n openshift-etcd-operator because it was missing |
| (x2) | openshift-etcd-operator | openshift-cluster-etcd-operator-config-observer-configobserver | etcd-operator | ObserveTLSSecurityProfile | minTLSVersion changed to VersionTLS12 |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-config-observer-configobserver | kube-controller-manager-operator | ObserveServiceCAConfigMap | observed change in config |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller | kube-controller-manager-operator | SecretCreated | Created Secret/kube-controller-manager-client-cert-key -n openshift-kube-controller-manager because it was missing |
| | openshift-service-ca-operator | service-ca-operator-status-controller-statussyncer_service-ca | service-ca-operator | OperatorStatusChanged | Status for clusteroperator/service-ca changed: Progressing changed from True to False ("Progressing: All service-ca-operator deployments updated") |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | openshift-kube-scheduler-operator | ServiceAccountCreated | Created ServiceAccount/localhost-recovery-client -n openshift-kube-scheduler because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-openshiftapiserver-workloadworkloadcontroller | openshift-apiserver-operator | ConfigMapCreated | Created ConfigMap/image-import-ca -n openshift-apiserver because it was missing |
| | openshift-authentication-operator | oauth-apiserver-openshiftauthenticatorcertrequester | authentication-operator | NoValidCertificateFound | No valid client certificate for OpenShiftAuthenticatorCertRequester is found: unable to parse certificate: data does not contain any valid RSA or ECDSA certificates |
| | openshift-authentication-operator | oauth-apiserver-webhook-authenticator-cert-approver-OpenShiftAuthenticator-webhookauthenticatorcertapprover_openshiftauthenticator | authentication-operator | CSRApproval | The CSR "system:openshift:openshift-authenticator-ddq7l" has been approved |
| | openshift-authentication-operator | oauth-apiserver-openshiftauthenticatorcertrequester | authentication-operator | CSRCreated | A csr "system:openshift:openshift-authenticator-ddq7l" is created for OpenShiftAuthenticatorCertRequester |
| | openshift-authentication-operator | cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources | authentication-operator | ConfigMapCreateFailed | Failed to create ConfigMap/audit -n openshift-authentication: namespaces "openshift-authentication" not found |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-targetconfigcontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/kube-scheduler-pod -n openshift-kube-scheduler because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | StartingNewRevision | new revision 1 triggered by "configmap \"kube-scheduler-pod-0\" not found" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": serviceaccounts \"localhost-recovery-client\" not found" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-config-observer-configobserver | kube-controller-manager-operator | ObservedConfigChanged | Writing updated observed config: map[string]any{ "extendedArguments": map[string]any{"cluster-cidr": []any{string("10.128.0.0/16")}, "cluster-name": []any{string("sno-2clbh")}, "feature-gates": []any{string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), ...}, "service-cluster-ip-range": []any{string("172.30.0.0/16")}}, "featureGates": []any{string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), ...}, + "serviceServingCert": map[string]any{ + "certFile": string("/etc/kubernetes/static-pod-resources/configmaps/service-ca/ca-bundle.crt"), + }, "servingInfo": map[string]any{"cipherSuites": []any{string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), string("TLS_CHACHA20_POLY1305_SHA256"), string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), ...}, "minTLSVersion": string("VersionTLS12")}, } |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-etcdstaticresources-etcdstaticresources-staticresources | etcd-operator | ServiceMonitorCreated | Created ServiceMonitor.monitoring.coreos.com/etcd-minimal -n openshift-etcd-operator because it was missing |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found" to "RevisionControllerDegraded: configmap \"audit\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-targetconfigcontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/cluster-policy-controller-config -n openshift-kube-controller-manager because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiserverstaticresources-apiserverstaticresources-staticresources | openshift-apiserver-operator | ServiceAccountCreated | Created ServiceAccount/openshift-apiserver-sa -n openshift-apiserver because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | openshift-kube-scheduler-operator | SecretCreated | Created Secret/localhost-recovery-client-token -n openshift-kube-scheduler because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-revisioncontroller | openshift-apiserver-operator | RevisionTriggered | new revision 1 triggered by "configmap \"audit-0\" not found" |
| | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-55b69c6c48-mzk89 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:333e6572029953b4c4676076f0991ee6e5c7d28cbe2887c71b1682f19831d8a1" in 2.443s (2.443s including waiting). Image size: 505990615 bytes. |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Progressing changed from Unknown to True ("Progressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: openshiftcontrollermanagers.operator.openshift.io/cluster: observed generation is 0, desired generation is 2."),Available changed from Unknown to False ("Available: no pods available on any node."),Upgradeable changed from Unknown to True ("All is well") |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nAPIServerDeploymentDegraded: waiting for observed configuration to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for observed configuration to be available\nAPIServerWorkloadDegraded: " to "APIServerDeploymentDegraded: waiting for observed configuration to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for observed configuration to be available\nAPIServerWorkloadDegraded: " |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiserverstaticresources-apiserverstaticresources-staticresources | openshift-apiserver-operator | ConfigMapCreated | Created ConfigMap/trusted-ca-bundle -n openshift-apiserver because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/kube-scheduler-pod-1 -n openshift-kube-scheduler because it was missing |
| | openshift-authentication-operator | oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources | authentication-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:oauth-apiserver because it was missing |
| | openshift-authentication-operator | cluster-authentication-operator-resource-sync-controller-resourcesynccontroller | authentication-operator | ConfigMapCreateFailed | Failed to create ConfigMap/etcd-serving-ca -n openshift-oauth-apiserver: namespaces "openshift-oauth-apiserver" not found |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server" to "RevisionControllerDegraded: configmap \"audit\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nAuditPolicyDegraded: namespaces \"openshift-oauth-apiserver\" not found" |
| | openshift-authentication-operator | oauth-apiserver-audit-policy-controller-auditpolicycontroller | authentication-operator | ConfigMapCreateFailed | Failed to create ConfigMap/audit -n openshift-oauth-apiserver: namespaces "openshift-oauth-apiserver" not found |
| | openshift-authentication-operator | oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources | authentication-operator | NamespaceCreated | Created Namespace/openshift-oauth-apiserver because it was missing |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-oauth-apiserver namespace |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | kube-controller-manager-operator | ServiceCreated | Created Service/kube-controller-manager -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | SecretCreated | Created Secret/etcd-client -n openshift-kube-apiserver because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-targetconfigcontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/recycler-config -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/config-1 -n openshift-kube-scheduler because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-resource-sync-controller-resourcesynccontroller | openshift-apiserver-operator | SecretCreated | Created Secret/etcd-client -n openshift-apiserver because it was missing |
| | openshift-cluster-olm-operator | cluster-olm-operator | cluster-olm-operator-lock | LeaderElection | cluster-olm-operator-55b69c6c48-mzk89_11414aa6-2307-4c9a-aba9-b8c74506458c became leader |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/scheduler-kubeconfig-1 -n openshift-kube-scheduler because it was missing |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Upgradeable changed from Unknown to True ("All is well") |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | kube-controller-manager-operator | ServiceAccountCreated | Created ServiceAccount/kube-controller-manager-sa -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/serviceaccount-ca-1 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObservedConfigWriteError | Failed to write observed config: Operation cannot be fulfilled on kubeapiservers.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-1 -n openshift-kube-scheduler because it was missing |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [{"" "namespaces" "" "openshift-cluster-olm-operator"} {"operator.openshift.io" "olms" "" "cluster"}] to [{"" "namespaces" "" "openshift-catalogd"} {"apiextensions.k8s.io" "customresourcedefinitions" "" "clustercatalogs.olm.operatorframework.io"} {"" "serviceaccounts" "openshift-catalogd" "catalogd-controller-manager"} {"rbac.authorization.k8s.io" "roles" "openshift-catalogd" "catalogd-leader-election-role"} {"rbac.authorization.k8s.io" "roles" "openshift-config" "catalogd-manager-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "catalogd-manager-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "catalogd-metrics-reader"} {"rbac.authorization.k8s.io" "clusterroles" "" "catalogd-proxy-role"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-catalogd" "catalogd-leader-election-rolebinding"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-config" "catalogd-manager-rolebinding"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "catalogd-manager-rolebinding"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "catalogd-proxy-rolebinding"} {"" "configmaps" "openshift-catalogd" "catalogd-trusted-ca-bundle"} {"" "services" "openshift-catalogd" "catalogd-service"} {"apps" "deployments" "openshift-catalogd" "catalogd-controller-manager"} {"olm.operatorframework.io" "clustercatalogs" "" "openshift-certified-operators"} {"olm.operatorframework.io" "clustercatalogs" "" "openshift-community-operators"} {"olm.operatorframework.io" "clustercatalogs" "" "openshift-redhat-marketplace"} {"olm.operatorframework.io" "clustercatalogs" "" "openshift-redhat-operators"} {"admissionregistration.k8s.io" "mutatingwebhookconfigurations" "" "catalogd-mutating-webhook-configuration"} {"" "namespaces" "" "openshift-operator-controller"} {"apiextensions.k8s.io" "customresourcedefinitions" "" "clusterextensions.olm.operatorframework.io"} {"" "serviceaccounts" "openshift-operator-controller" "operator-controller-controller-manager"} {"rbac.authorization.k8s.io" "roles" "openshift-config" "operator-controller-manager-role"} {"rbac.authorization.k8s.io" "roles" "openshift-operator-controller" "operator-controller-leader-election-role"} {"rbac.authorization.k8s.io" "roles" "openshift-operator-controller" "operator-controller-manager-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-clusterextension-editor-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-clusterextension-viewer-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-extension-editor-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-extension-viewer-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-manager-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-metrics-reader"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-proxy-role"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-config" "operator-controller-manager-rolebinding"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-operator-controller" "operator-controller-leader-election-rolebinding"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-operator-controller" "operator-controller-manager-rolebinding"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "operator-controller-manager-rolebinding"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "operator-controller-proxy-rolebinding"} {"" "configmaps" "openshift-operator-controller" "operator-controller-trusted-ca-bundle"} {"" "services" "openshift-operator-controller" "operator-controller-controller-manager-metrics-service"} {"apps" "deployments" "openshift-operator-controller" "operator-controller-controller-manager"} {"operator.openshift.io" "olms" "" "cluster"} {"" "namespaces" "" "openshift-cluster-olm-operator"}],status.versions changed from [] to [{"operator" "4.18.32"}] |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nAuditPolicyDegraded: namespaces \"openshift-oauth-apiserver\" not found" to "RevisionControllerDegraded: configmap \"audit\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nAuditPolicyDegraded: namespaces \"openshift-oauth-apiserver\" not found\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-member-observer-controller-etcdmemberscontroller | etcd-operator | ReportEtcdMembersErrorUpdatingStatus | Operation cannot be fulfilled on etcds.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorVersionChanged | clusteroperator/olm version "operator" changed from "" to "4.18.32" |
| | openshift-authentication-operator | oauth-apiserver-openshiftauthenticatorcertrequester | authentication-operator | ClientCertificateCreated | A new client certificate for OpenShiftAuthenticatorCertRequester is available |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | SecretCreated | Created Secret/serving-cert-1 -n openshift-kube-scheduler because it was missing |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Degraded changed from Unknown to False ("All is well") |
| | openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | NamespaceCreated | Created Namespace/openshift-operator-controller because it was missing |
| (x2) | openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObserveTLSSecurityProfile | cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"] |
| | openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | NamespaceCreated | Created Namespace/openshift-catalogd because it was missing |
| (x5) | openshift-controller-manager | kubelet | controller-manager-6fcbb7f9bd-gdt9b | FailedMount | MountVolume.SetUp failed for volume "client-ca" : configmap "client-ca" not found |
| (x2) | openshift-kube-apiserver-operator |
kube-apiserver-operator-config-observer-configobserver |
kube-apiserver-operator |
ObservedConfigChanged |
Writing updated observed config: map[string]any{ + "admission": map[string]any{ + "pluginConfig": map[string]any{ + "PodSecurity": map[string]any{"configuration": map[string]any{...}}, + "network.openshift.io/ExternalIPRanger": map[string]any{"configuration": map[string]any{...}}, + "network.openshift.io/RestrictedEndpointsAdmission": map[string]any{"configuration": map[string]any{...}}, + }, + }, + "apiServerArguments": map[string]any{ + "api-audiences": []any{string("https://kubernetes.default.svc")}, + "etcd-servers": []any{string("https://192.168.32.10:2379"), string("https://localhost:2379")}, + "feature-gates": []any{ + string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), + string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), + string("BareMetalLoadBalancer=true"), string("BuildCSIVolumes=true"), + string("ChunkSizeMiB=true"), string("CloudDualStackNodeIPs=true"), ..., + }, + "goaway-chance": []any{string("0")}, + "runtime-config": []any{string("admissionregistration.k8s.io/v1beta1=true")}, + "send-retry-after-while-not-ready-once": []any{string("true")}, + "service-account-issuer": []any{string("https://kubernetes.default.svc")}, + "service-account-jwks-uri": []any{string("https://api.sno.openstack.lab:6443/openid/v1/jwks")}, + "shutdown-delay-duration": []any{string("0s")}, + }, + "corsAllowedOrigins": []any{string(`//127\.0\.0\.1(:|$)`), string("//localhost(:|$)")}, + "gracefulTerminationDuration": string("15"), + "servicesSubnet": string("172.30.0.0/16"), + "servingInfo": map[string]any{ + "bindAddress": string("0.0.0.0:6443"), + "bindNetwork": string("tcp4"), + "cipherSuites": []any{ + string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), + string("TLS_CHACHA20_POLY1305_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"), + 
string("TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"), ..., + }, + "minTLSVersion": string("VersionTLS12"), + "namedCertificates": []any{ + map[string]any{ + "certFile": string("/etc/kubernetes/static-pod-certs"...), + "keyFile": string("/etc/kubernetes/static-pod-certs"...), + }, + map[string]any{ + "certFile": string("/etc/kubernetes/static-pod-certs"...), + "keyFile": string("/etc/kubernetes/static-pod-certs"...), + }, + map[string]any{ + "certFile": string("/etc/kubernetes/static-pod-certs"...), + "keyFile": string("/etc/kubernetes/static-pod-certs"...), + }, + map[string]any{ + "certFile": string("/etc/kubernetes/static-pod-certs"...), + "keyFile": string("/etc/kubernetes/static-pod-certs"...), + }, + map[string]any{ + "certFile": string("/etc/kubernetes/static-pod-resou"...), + "keyFile": string("/etc/kubernetes/static-pod-resou"...), + }, + }, + }, } |
| (x2) | openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObserveStorageUpdated | Updated storage urls to https://192.168.32.10:2379,https://localhost:2379 |
| (x2) | openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObserveTLSSecurityProfile | minTLSVersion changed to VersionTLS12 |
| (x81) | openshift-kube-apiserver-operator | kube-apiserver-operator-target-config-controller-targetconfigcontroller | kube-apiserver-operator | ConfigMissing | no observedConfig |
| (x2) | openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObserveFeatureFlagsUpdated | Updated apiServerArguments.feature-gates to AWSEFSDriverVolumeMetrics=true,AdminNetworkPolicy=true,AlibabaPlatform=true,AzureWorkloadIdentity=true,BareMetalLoadBalancer=true,BuildCSIVolumes=true,ChunkSizeMiB=true,CloudDualStackNodeIPs=true,DisableKubeletCloudCredentialProviders=true,GCPLabelsTags=true,HardwareSpeed=true,IngressControllerLBSubnetsAWS=true,KMSv1=true,ManagedBootImages=true,ManagedBootImagesAWS=true,MultiArchInstallAWS=true,MultiArchInstallGCP=true,NetworkDiagnosticsConfig=true,NetworkLiveMigration=true,NetworkSegmentation=true,NewOLM=true,NodeDisruptionPolicy=true,OnClusterBuild=true,PersistentIPsForVirtualization=true,PrivateHostedZoneAWS=true,SetEIPForNLBIngressController=true,StreamingCollectionEncodingToJSON=true,StreamingCollectionEncodingToProtobuf=true,VSphereControlPlaneMachineSet=true,VSphereDriverConfiguration=true,VSphereMultiVCenters=true,VSphereStaticIPs=true,ValidatingAdmissionPolicy=true,AWSClusterHostedDNS=false,AdditionalRoutingCapabilities=false,AutomatedEtcdBackup=false,BootcNodeManagement=false,CSIDriverSharedResource=false,ClusterAPIInstall=false,ClusterAPIInstallIBMCloud=false,ClusterMonitoringConfig=false,ConsolePluginContentSecurityPolicy=false,DNSNameResolver=false,DynamicResourceAllocation=false,EtcdBackendQuota=false,EventedPLEG=false,Example=false,ExternalOIDC=false,ExternalOIDCWithUIDAndExtraClaimMappings=false,GCPClusterHostedDNS=false,GatewayAPI=false,ImageStreamImportMode=false,IngressControllerDynamicConfigurationManager=false,InsightsConfig=false,InsightsConfigAPI=false,InsightsOnDemandDataGather=false,InsightsRuntimeExtractor=false,MachineAPIMigration=false,MachineAPIOperatorDisableMachineHealthCheckController=false,MachineAPIProviderOpenStack=false,MachineConfigNodes=false,MaxUnavailableStatefulSet=false,MetricsCollectionProfiles=false,MinimumKubeletVersion=false,MixedCPUsAllocation=false,MultiArchInstallAzure=false,NodeSwap=false,NutanixMultiSubnets=false,OVNObservability=false,OpenShiftPodSecurityAdmission=false,PinnedImages=false,PlatformOperators=false,ProcMountType=false,RouteAdvertisements=false,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,SignatureStores=false,SigstoreImageVerification=false,TranslateStreamCloseWebsocketRequests=false,UpgradeStatus=false,UserNamespacesPodSecurityStandards=false,UserNamespacesSupport=false,VSphereMultiNetworks=false,VolumeAttributesClass=false,VolumeGroupSnapshot=false |
openshift-kube-controller-manager-operator | kube-controller-manager-operator-targetconfigcontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/csr-signer-ca -n openshift-kube-controller-manager-operator because it was missing | |
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-catalogd namespace | |
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | RevisionTriggered | new revision 1 triggered by "configmap \"kube-scheduler-pod-0\" not found" | |
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | SecretCreated | Created Secret/localhost-recovery-client-token-1 -n openshift-kube-scheduler because it was missing | |
openshift-authentication-operator | oauth-apiserver-revisioncontroller | authentication-operator | StartingNewRevision | new revision 1 triggered by "configmap \"audit-0\" not found" | |
openshift-kube-controller-manager-operator | kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | kube-controller-manager-operator | ServiceAccountCreated | Created ServiceAccount/pv-recycler-controller -n openshift-infra because it was missing | |
openshift-authentication-operator | oauth-apiserver-audit-policy-controller-auditpolicycontroller | authentication-operator | ConfigMapCreated | Created ConfigMap/audit -n openshift-oauth-apiserver because it was missing | |
openshift-authentication-operator | cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources | authentication-operator | NamespaceCreated | Created Namespace/openshift-authentication because it was missing | |
| (x38) | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | RequiredInstallerResourcesMissing | configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0 |
openshift-authentication-operator | cluster-authentication-operator-resource-sync-controller-resourcesynccontroller | authentication-operator | ConfigMapCreated | Created ConfigMap/etcd-serving-ca -n openshift-oauth-apiserver because it was missing | |
openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nCertRotation_KubeSchedulerClient_Degraded: configmaps \"kube-control-plane-signer-ca\" already exists\nCertRotation_InternalLoadBalancerServing_Degraded: configmaps \"loadbalancer-serving-ca\" already exists" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nCertRotation_KubeSchedulerClient_Degraded: configmaps \"kube-control-plane-signer-ca\" already exists\nCertRotation_InternalLoadBalancerServing_Degraded: configmaps \"loadbalancer-serving-ca\" already exists\nConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on kubeapiservers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" | |
openshift-etcd-operator | openshift-cluster-etcd-operator-script-controller-scriptcontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-scripts -n openshift-etcd because it was missing | |
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-operator-controller namespace | |
openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nAuditPolicyDegraded: namespaces \"openshift-oauth-apiserver\" not found\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "RevisionControllerDegraded: configmap \"audit\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nAuditPolicyDegraded: namespaces \"openshift-oauth-apiserver\" not found\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nResourceSyncControllerDegraded: namespaces \"openshift-oauth-apiserver\" not found" | |
openshift-kube-controller-manager-operator | kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | kube-controller-manager-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:kube-controller-manager-recovery because it was missing | |
openshift-authentication-operator | cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources | authentication-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-authentication because it was missing | |
openshift-kube-controller-manager-operator | kube-controller-manager-operator-targetconfigcontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/csr-controller-ca -n openshift-kube-controller-manager-operator because it was missing | |
openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | StartingNewRevision | new revision 1 triggered by "configmap \"etcd-pod-0\" not found" | |
openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | CustomResourceDefinitionCreated | Created CustomResourceDefinition.apiextensions.k8s.io/clustercatalogs.olm.operatorframework.io because it was missing | |
openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | CustomResourceDefinitionCreated | Created CustomResourceDefinition.apiextensions.k8s.io/clusterextensions.olm.operatorframework.io because it was missing | |
openshift-etcd-operator | openshift-cluster-etcd-operator-target-config-controller-targetconfigcontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-pod -n openshift-etcd because it was missing | |
openshift-authentication-operator | oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources | authentication-operator | ServiceCreated | Created Service/api -n openshift-oauth-apiserver because it was missing | |
openshift-kube-controller-manager-operator | kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/csr-controller-ca -n openshift-config-managed because it was missing | |
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": serviceaccounts \"localhost-recovery-client\" not found" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": serviceaccounts \"localhost-recovery-client\" not found" | |
openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nAuditPolicyDegraded: namespaces \"openshift-oauth-apiserver\" not found\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nResourceSyncControllerDegraded: namespaces \"openshift-oauth-apiserver\" not found" to "RevisionControllerDegraded: configmap \"audit\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nResourceSyncControllerDegraded: namespaces \"openshift-oauth-apiserver\" not found" | |
| (x2) | openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kubelet-serving-ca -n openshift-kube-apiserver because it was missing |
openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-pod-1 -n openshift-etcd because it was missing | |
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-authentication namespace | |
openshift-apiserver-operator | openshift-apiserver-operator-openshiftapiserver-workloadworkloadcontroller | openshift-apiserver-operator | DeploymentCreated | Created Deployment.apps/apiserver -n openshift-apiserver because it was missing | |
openshift-kube-apiserver-operator | kube-apiserver-operator-target-config-controller-targetconfigcontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/config -n openshift-kube-apiserver because it was missing | |
openshift-kube-controller-manager-operator | kube-controller-manager-operator-targetconfigcontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/serviceaccount-ca -n openshift-kube-controller-manager because it was missing | |
openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "All is well" to "AuthenticatorCertKeyProgressing: All is well" | |
openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nCertRotation_KubeSchedulerClient_Degraded: configmaps \"kube-control-plane-signer-ca\" already exists\nCertRotation_InternalLoadBalancerServing_Degraded: configmaps \"loadbalancer-serving-ca\" already exists\nConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on kubeapiservers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nCertRotation_KubeSchedulerClient_Degraded: configmaps \"kube-control-plane-signer-ca\" already exists\nCertRotation_InternalLoadBalancerServing_Degraded: configmaps \"loadbalancer-serving-ca\" already exists" | |
openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | StartingNewRevision | new revision 1 triggered by "configmap \"kube-apiserver-pod-0\" not found" | |
openshift-kube-apiserver-operator | kube-apiserver-operator-target-config-controller-targetconfigcontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-pod -n openshift-kube-apiserver because it was missing | |
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": serviceaccounts \"localhost-recovery-client\" not found" to "NodeControllerDegraded: All master nodes are ready" | |
openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-pod-1 -n openshift-kube-apiserver because it was missing | |
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/kube-scheduler-pod-2 -n openshift-kube-scheduler because it was missing | |
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": serviceaccounts \"localhost-recovery-client\" not found" to "NodeControllerDegraded: All master nodes are ready\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": serviceaccounts \"localhost-recovery-client\" not found",Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 1"),Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 1" | |
openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/operator-controller-leader-election-role -n openshift-operator-controller because it was missing | |
openshift-kube-controller-manager-operator | kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | kube-controller-manager-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:cluster-csr-approver-controller because it was missing | |
openshift-kube-controller-manager-operator | kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | kube-controller-manager-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:cluster-csr-approver-controller because it was missing | |
openshift-config-operator | config-operator | config-operator-lock | LeaderElection | openshift-config-operator-7c6bdb986f-fcnqs_071f5b81-bbbc-4a14-9782-2688cc6628a6 became leader | |
openshift-kube-apiserver-operator | kube-apiserver-operator-target-config-controller-targetconfigcontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/client-ca -n openshift-kube-apiserver because it was missing | |
openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | SecretCreated | Created Secret/etcd-all-certs-1 -n openshift-etcd because it was missing | |
openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-all-bundles-1 -n openshift-etcd because it was missing | |
openshift-etcd-operator | openshift-cluster-etcd-operator-target-config-controller-targetconfigcontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/restore-etcd-pod -n openshift-etcd because it was missing | |
openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-endpoints-1 -n openshift-etcd because it was missing | |
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | StartingNewRevision | new revision 2 triggered by "required configmap/kube-scheduler-pod has changed" | |
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | NodeTargetRevisionChanged | Updating node "master-0" from revision 0 to 1 because node master-0 static pod not found | |
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-targetconfigcontroller | openshift-kube-scheduler-operator | ConfigMapUpdated | Updated ConfigMap/kube-scheduler-pod -n openshift-kube-scheduler: cause by changes in data.pod.yaml | |
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | PodCreated | Created Pod/installer-1-master-0 -n openshift-kube-scheduler because it was missing | |
openshift-kube-controller-manager-operator | kube-controller-manager-operator-targetconfigcontroller | kube-controller-manager-operator | SecretCreated | Created Secret/csr-signer -n openshift-kube-controller-manager because it was missing | |
openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: endpoints \"api\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." | |
openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "RevisionControllerDegraded: configmap \"audit\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" | |
openshift-authentication-operator | cluster-authentication-operator-resource-sync-controller-resourcesynccontroller | authentication-operator | SecretCreated | Created Secret/etcd-client -n openshift-oauth-apiserver because it was missing | |
openshift-kube-controller-manager-operator | kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | kube-controller-manager-operator | ServiceAccountCreated | Created ServiceAccount/localhost-recovery-client -n openshift-kube-controller-manager because it was missing | |
openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/catalogd-manager-role because it was missing | |
| (x2) | openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kubelet-serving-ca -n openshift-config-managed because it was missing |
openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/operator-controller-manager-role -n openshift-operator-controller because it was missing | |
openshift-kube-controller-manager-operator | kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | kube-controller-manager-operator | SecretCreated | Created Secret/localhost-recovery-client-token -n openshift-kube-controller-manager because it was missing | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nResourceSyncControllerDegraded: namespaces \"openshift-oauth-apiserver\" not found" to "RevisionControllerDegraded: configmap \"audit\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" | |
openshift-config-operator |
config-operator-configoperatorcontroller |
openshift-config-operator |
FastControllerResync |
Controller "ConfigOperatorController" resync interval is set to 10s which might lead to client request throttling | |
openshift-authentication-operator |
cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources |
authentication-operator |
ConfigMapCreated |
Created ConfigMap/v4-0-config-system-trusted-ca-bundle -n openshift-authentication because it was missing | |
openshift-apiserver |
default-scheduler |
apiserver-6578c4d554-6jl9n |
Scheduled |
Successfully assigned openshift-apiserver/apiserver-6578c4d554-6jl9n to master-0 | |
openshift-apiserver |
replicaset-controller |
apiserver-6578c4d554 |
SuccessfulCreate |
Created pod: apiserver-6578c4d554-6jl9n | |
openshift-cluster-olm-operator |
CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources |
cluster-olm-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/catalogd-manager-role -n openshift-config because it was missing | |
openshift-cluster-olm-operator |
CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources |
cluster-olm-operator |
ServiceAccountCreated |
Created ServiceAccount/catalogd-controller-manager -n openshift-catalogd because it was missing | |
openshift-apiserver |
deployment-controller |
apiserver |
ScalingReplicaSet |
Scaled up replica set apiserver-6578c4d554 to 1 | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Progressing changed from False to True ("OperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 0, desired generation is 2.") | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: waiting for observed configuration to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for observed configuration to be available\nAPIServerWorkloadDegraded: " to "All is well",Progressing message changed from "OperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 0, desired generation is 2." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 0, desired generation is 1.\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 0, desired generation is 2.",Available message changed from "APIServicesAvailable: PreconditionNotReady" to "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady" | |
openshift-cluster-olm-operator |
OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources |
cluster-olm-operator |
ServiceAccountCreated |
Created ServiceAccount/operator-controller-controller-manager -n openshift-operator-controller because it was missing | |
openshift-cluster-olm-operator |
OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources |
cluster-olm-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/operator-controller-manager-role -n openshift-config because it was missing | |
openshift-cluster-olm-operator |
CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources |
cluster-olm-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/catalogd-leader-election-role -n openshift-catalogd because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/config-2 -n openshift-kube-scheduler because it was missing | |
| (x3) | openshift-apiserver |
kubelet |
apiserver-6578c4d554-6jl9n |
FailedMount |
MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found |
openshift-kube-scheduler |
multus |
installer-1-master-0 |
AddedInterface |
Add eth0 [10.128.0.34/23] from ovn-kubernetes | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/config-1 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nCertRotation_KubeSchedulerClient_Degraded: configmaps \"kube-control-plane-signer-ca\" already exists\nCertRotation_InternalLoadBalancerServing_Degraded: configmaps \"loadbalancer-serving-ca\" already exists" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nCertRotation_InternalLoadBalancerServing_Degraded: configmaps \"loadbalancer-serving-ca\" already exists" | |
| (x6) | openshift-route-controller-manager |
kubelet |
route-controller-manager-69bd477586-66ml6 |
FailedMount |
MountVolume.SetUp failed for volume "client-ca" : configmap "client-ca" not found |
| (x6) | openshift-route-controller-manager |
kubelet |
route-controller-manager-69bd477586-66ml6 |
FailedMount |
MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-targetconfigcontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/controller-manager-kubeconfig -n openshift-kube-controller-manager because it was missing | |
openshift-kube-scheduler |
kubelet |
installer-1-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f56d01ef06fe016c748e8c87538dea8e9fcc84856eb116bd3597cc8e042e9f0a" already present on machine | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-resource-sync-controller-resourcesynccontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-client-ca -n openshift-config-managed because it was missing | |
openshift-cluster-olm-operator |
OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources |
cluster-olm-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/operator-controller-clusterextension-editor-role because it was missing | |
openshift-cluster-olm-operator |
OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources |
cluster-olm-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/operator-controller-clusterextension-viewer-role because it was missing | |
openshift-cluster-olm-operator |
CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources |
cluster-olm-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/catalogd-metrics-reader because it was missing | |
openshift-kube-scheduler |
kubelet |
installer-1-master-0 |
Started |
Started container installer | |
openshift-kube-scheduler |
kubelet |
installer-1-master-0 |
Created |
Created container: installer | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-1 -n openshift-kube-apiserver because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/serviceaccount-ca-2 -n openshift-kube-scheduler because it was missing | |
openshift-cluster-olm-operator |
CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources |
cluster-olm-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/catalogd-leader-election-rolebinding -n openshift-catalogd because it was missing | |
openshift-cluster-olm-operator |
OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources |
cluster-olm-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/operator-controller-extension-editor-role because it was missing | |
openshift-cluster-olm-operator |
CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources |
cluster-olm-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/catalogd-proxy-role because it was missing | |
openshift-authentication-operator |
oauth-apiserver-revisioncontroller |
authentication-operator |
ConfigMapCreated |
Created ConfigMap/audit-1 -n openshift-oauth-apiserver because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/client-ca -n openshift-kube-controller-manager because it was missing | |
openshift-authentication-operator |
cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources |
authentication-operator |
SecretCreated |
Created Secret/v4-0-config-system-ocp-branding-template -n openshift-authentication because it was missing | |
openshift-apiserver |
deployment-controller |
apiserver |
ScalingReplicaSet |
Scaled up replica set apiserver-6bd884947c to 1 from 0 | |
openshift-apiserver |
deployment-controller |
apiserver |
ScalingReplicaSet |
Scaled down replica set apiserver-6578c4d554 to 0 from 1 | |
openshift-apiserver-operator |
openshift-apiserver-operator-openshiftapiserver-workloadworkloadcontroller |
openshift-apiserver-operator |
DeploymentUpdated |
Updated Deployment.apps/apiserver -n openshift-apiserver because it changed | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 0, desired generation is 1.\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 0, desired generation is 2." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 0, desired generation is 1." | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 0, desired generation is 1." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 1, desired generation is 2." | |
openshift-apiserver |
replicaset-controller |
apiserver-6bd884947c |
SuccessfulCreate |
Created pod: apiserver-6bd884947c-tdlbn | |
openshift-authentication-operator |
oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources |
authentication-operator |
ServiceAccountCreated |
Created ServiceAccount/oauth-apiserver-sa -n openshift-oauth-apiserver because it was missing | |
openshift-authentication-operator |
oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources |
authentication-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:useroauthaccesstoken-manager because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/scheduler-kubeconfig-2 -n openshift-kube-scheduler because it was missing | |
openshift-cluster-olm-operator |
OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources |
cluster-olm-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/operator-controller-extension-viewer-role because it was missing | |
openshift-cluster-olm-operator |
CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources |
cluster-olm-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/catalogd-manager-rolebinding -n openshift-config because it was missing | |
openshift-authentication-operator |
oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources |
authentication-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/system:openshift:useroauthaccesstoken-manager because it was missing | |
openshift-apiserver |
default-scheduler |
apiserver-6bd884947c-tdlbn |
FailedScheduling |
0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/bound-sa-token-signing-certs-1 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-target-config-controller-targetconfigcontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-server-ca -n openshift-kube-apiserver because it was missing | |
openshift-apiserver |
replicaset-controller |
apiserver-6578c4d554 |
SuccessfulDelete |
Deleted pod: apiserver-6578c4d554-6jl9n | |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
ConfigMapUpdated |
Updated ConfigMap/config -n openshift-controller-manager: cause by changes in data.openshift-controller-manager.serving-cert.secret | |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/client-ca -n openshift-controller-manager because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
ConfigMapUpdated |
Updated ConfigMap/config -n openshift-route-controller-manager: cause by changes in data.openshift-route-controller-manager.serving-cert.secret | |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/client-ca -n openshift-route-controller-manager because it was missing | |
openshift-controller-manager |
replicaset-controller |
controller-manager-67d67c799d |
SuccessfulCreate |
Created pod: controller-manager-67d67c799d-b9bj6 | |
openshift-cluster-olm-operator |
OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources |
cluster-olm-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/operator-controller-manager-role because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-resource-sync-controller-resourcesynccontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-server-ca -n openshift-config-managed because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/etcd-serving-ca-1 -n openshift-kube-apiserver because it was missing | |
openshift-route-controller-manager |
deployment-controller |
route-controller-manager |
ScalingReplicaSet |
Scaled up replica set route-controller-manager-6965bd7478 to 1 from 0 | |
openshift-route-controller-manager |
deployment-controller |
route-controller-manager |
ScalingReplicaSet |
Scaled down replica set route-controller-manager-69bd477586 to 0 from 1 | |
openshift-route-controller-manager |
replicaset-controller |
route-controller-manager-69bd477586 |
SuccessfulDelete |
Deleted pod: route-controller-manager-69bd477586-66ml6 | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
StartingNewRevision |
new revision 1 triggered by "configmap \"kube-controller-manager-pod-0\" not found" | |
| (x4) | openshift-apiserver |
kubelet |
apiserver-6578c4d554-6jl9n |
FailedMount |
MountVolume.SetUp failed for volume "audit" : configmap "audit-0" not found |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-targetconfigcontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/kube-controller-manager-pod -n openshift-kube-controller-manager because it was missing | |
openshift-apiserver |
default-scheduler |
apiserver-6bd884947c-tdlbn |
Scheduled |
Successfully assigned openshift-apiserver/apiserver-6bd884947c-tdlbn to master-0 | |
openshift-controller-manager |
default-scheduler |
controller-manager-67d67c799d-b9bj6 |
FailedScheduling |
0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. | |
openshift-controller-manager |
replicaset-controller |
controller-manager-6fcbb7f9bd |
SuccessfulDelete |
Deleted pod: controller-manager-6fcbb7f9bd-gdt9b | |
openshift-route-controller-manager |
replicaset-controller |
route-controller-manager-6965bd7478 |
SuccessfulCreate |
Created pod: route-controller-manager-6965bd7478-x8mdg | |
openshift-controller-manager |
deployment-controller |
controller-manager |
ScalingReplicaSet |
Scaled down replica set controller-manager-6fcbb7f9bd to 0 from 1 | |
openshift-controller-manager |
deployment-controller |
controller-manager |
ScalingReplicaSet |
Scaled up replica set controller-manager-67d67c799d to 1 from 0 | |
openshift-cluster-olm-operator |
CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources |
cluster-olm-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/catalogd-manager-rolebinding because it was missing | |
openshift-catalogd |
deployment-controller |
catalogd-controller-manager |
ScalingReplicaSet |
Scaled up replica set catalogd-controller-manager-67bc7c997f to 1 | |
openshift-cluster-olm-operator |
CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources |
cluster-olm-operator |
ConfigMapCreated |
Created ConfigMap/catalogd-trusted-ca-bundle -n openshift-catalogd because it was missing | |
openshift-catalogd |
kubelet |
catalogd-controller-manager-67bc7c997f-jdfsm |
FailedMount |
MountVolume.SetUp failed for volume "catalogserver-certs" : secret "catalogserver-cert" not found | |
openshift-cluster-olm-operator |
CatalogdDeploymentCatalogdControllerManager-catalogddeploymentcatalogdcontrollermanager-deployment-controller--catalogddeploymentcatalogdcontrollermanager |
cluster-olm-operator |
DeploymentCreated |
Created Deployment.apps/catalogd-controller-manager -n openshift-catalogd because it was missing | |
openshift-catalogd |
replicaset-controller |
catalogd-controller-manager-67bc7c997f |
SuccessfulCreate |
Created pod: catalogd-controller-manager-67bc7c997f-jdfsm | |
openshift-catalogd |
default-scheduler |
catalogd-controller-manager-67bc7c997f-jdfsm |
Scheduled |
Successfully assigned openshift-catalogd/catalogd-controller-manager-67bc7c997f-jdfsm to master-0 | |
openshift-cluster-olm-operator |
OperatorcontrollerDeploymentOperatorControllerControllerManager-operatorcontrollerdeploymentoperatorcontrollercontrollermanager-deployment-controller--operatorcontrollerdeploymentoperatorcontrollercontrollermanager |
cluster-olm-operator |
DeploymentCreated |
Created Deployment.apps/operator-controller-controller-manager -n openshift-operator-controller because it was missing | |
openshift-cluster-version |
kubelet |
cluster-version-operator-76959b6567-v49tq |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-release@sha256:6177c447b98c36a42fd45fa2ba413da73d14d0a7ad3aecfa977554f5ae9583cc" | |
openshift-cluster-olm-operator |
CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources |
cluster-olm-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/catalogd-proxy-rolebinding because it was missing | |
openshift-operator-controller |
deployment-controller |
operator-controller-controller-manager |
ScalingReplicaSet |
Scaled up replica set operator-controller-controller-manager-85c9b89969 to 1 | |
openshift-cluster-olm-operator |
OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources |
cluster-olm-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/operator-controller-metrics-reader because it was missing | |
openshift-catalogd |
deployment-controller |
catalogd-controller-manager |
ScalingReplicaSet |
Scaled up replica set catalogd-controller-manager-67bc7c997f to 1 | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/kube-controller-manager-pod-1 -n openshift-kube-controller-manager because it was missing | |
openshift-cluster-olm-operator |
CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources |
cluster-olm-operator |
ServiceCreated |
Created Service/catalogd-service -n openshift-catalogd because it was missing | |
openshift-cluster-olm-operator |
olm-status-controller-statussyncer_olm |
cluster-olm-operator |
OperatorStatusChanged |
Status for clusteroperator/olm changed: Progressing changed from Unknown to True ("OperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to act on changes"),Available changed from Unknown to False ("OperatorcontrollerDeploymentOperatorControllerControllerManagerAvailable: Waiting for Deployment") | |
openshift-cluster-olm-operator |
olm-status-controller-statussyncer_olm |
cluster-olm-operator |
OperatorStatusChanged |
Status for clusteroperator/olm changed: Progressing message changed from "OperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to act on changes" to "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to act on changes\nOperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to act on changes",Available message changed from "OperatorcontrollerDeploymentOperatorControllerControllerManagerAvailable: Waiting for Deployment" to "CatalogdDeploymentCatalogdControllerManagerAvailable: Waiting for Deployment\nOperatorcontrollerDeploymentOperatorControllerControllerManagerAvailable: Waiting for Deployment" | |
openshift-cluster-olm-operator |
CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources |
cluster-olm-operator |
MutatingWebhookConfigurationCreated |
Created MutatingWebhookConfiguration.admissionregistration.k8s.io/catalogd-mutating-webhook-configuration because it was missing | |
openshift-cluster-olm-operator |
OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources |
cluster-olm-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/operator-controller-proxy-role because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-2 -n openshift-kube-scheduler because it was missing | |
| (x2) | openshift-route-controller-manager |
default-scheduler |
route-controller-manager-6965bd7478-x8mdg |
FailedScheduling |
0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. |
openshift-controller-manager-operator |
openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager |
openshift-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: openshiftcontrollermanagers.operator.openshift.io/cluster: observed generation is 0, desired generation is 2." to "Progressing: deployment/controller-manager: observed generation is 3, desired generation is 4.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: observed generation is 1, desired generation is 2.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" | |
openshift-cluster-node-tuning-operator |
multus |
cluster-node-tuning-operator-ff6c9b66-k8xp8 |
AddedInterface |
Add eth0 [10.128.0.10/23] from ovn-kubernetes | |
openshift-image-registry |
kubelet |
cluster-image-registry-operator-96c8c64b8-dtwmd |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc03f91dbf08df9907c0ebad30c54a7fa92285b19ec4e440ed762b197378a861" | |
| (x9) | openshift-operator-controller |
replicaset-controller |
operator-controller-controller-manager-85c9b89969 |
FailedCreate |
Error creating: pods "operator-controller-controller-manager-85c9b89969-" is forbidden: unable to validate against any security context constraint: provider "privileged": Forbidden: not usable by user or serviceaccount |
| | openshift-monitoring | kubelet | cluster-monitoring-operator-756d64c8c4-ddgs9 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0100b616991bd8bca68d583e902283aa4cc0d388046437d5d68407190e3fb041" |
| | openshift-monitoring | multus | cluster-monitoring-operator-756d64c8c4-ddgs9 | AddedInterface | Add eth0 [10.128.0.25/23] from ovn-kubernetes |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | SecretCreated | Created Secret/serving-cert-2 -n openshift-kube-scheduler because it was missing |
| | openshift-dns-operator | kubelet | dns-operator-86b8869b79-lmqrr | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1c8b9784a60860a08bd47935f0767b7b7f8f36c5c0adb7623a31b82c01d4c09" |
| | openshift-dns-operator | multus | dns-operator-86b8869b79-lmqrr | AddedInterface | Add eth0 [10.128.0.21/23] from ovn-kubernetes |
| | openshift-multus | multus | network-metrics-daemon-bnllz | AddedInterface | Add eth0 [10.128.0.4/23] from ovn-kubernetes |
| | openshift-controller-manager | multus | controller-manager-67d67c799d-b9bj6 | AddedInterface | Add eth0 [10.128.0.38/23] from ovn-kubernetes |
| | openshift-multus | multus | multus-admission-controller-7c64d55f8-fzfsp | AddedInterface | Add eth0 [10.128.0.26/23] from ovn-kubernetes |
| | openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/operator-controller-leader-election-rolebinding -n openshift-operator-controller because it was missing |
| | openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/operator-controller-manager-rolebinding -n openshift-config because it was missing |
| | openshift-operator-lifecycle-manager | multus | package-server-manager-5c696dbdcd-t7n5b | AddedInterface | Add eth0 [10.128.0.15/23] from ovn-kubernetes |
| | openshift-multus | kubelet | multus-admission-controller-7c64d55f8-fzfsp | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bbe162375a11ed3810a1081c30dd400f461f2421d5f1e27d8792048bbd216956" |
| | openshift-controller-manager | default-scheduler | controller-manager-67d67c799d-b9bj6 | Scheduled | Successfully assigned openshift-controller-manager/controller-manager-67d67c799d-b9bj6 to master-0 |
| | openshift-catalogd | multus | catalogd-controller-manager-67bc7c997f-jdfsm | AddedInterface | Add eth0 [10.128.0.36/23] from ovn-kubernetes |
| | openshift-catalogd | multus | catalogd-controller-manager-67bc7c997f-jdfsm | AddedInterface | Add eth0 [10.128.0.36/23] from ovn-kubernetes |
| | openshift-etcd | kubelet | installer-1-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6dd9324942b3d09b4b9a768f36b47be4e555d947910ee3d115fc5448c95f7399" already present on machine |
| | openshift-monitoring | multus | cluster-monitoring-operator-756d64c8c4-ddgs9 | AddedInterface | Add eth0 [10.128.0.25/23] from ovn-kubernetes |
| | openshift-multus | kubelet | network-metrics-daemon-bnllz | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:80531a0fe966e1cc0582c581951b09a7a4e42037c106748c44859110361b2c1b" |
| | openshift-monitoring | kubelet | cluster-monitoring-operator-756d64c8c4-ddgs9 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0100b616991bd8bca68d583e902283aa4cc0d388046437d5d68407190e3fb041" |
| | openshift-image-registry | multus | cluster-image-registry-operator-96c8c64b8-dtwmd | AddedInterface | Add eth0 [10.128.0.18/23] from ovn-kubernetes |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" |
| | openshift-marketplace | kubelet | marketplace-operator-6cc5b65c6b-wqxmh | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dab7a82d88f90f1ef4ac307b16132d4d573a4fa9080acc3272ca084613bd902a" |
| | openshift-multus | multus | network-metrics-daemon-bnllz | AddedInterface | Add eth0 [10.128.0.4/23] from ovn-kubernetes |
| | openshift-etcd | multus | installer-1-master-0 | AddedInterface | Add eth0 [10.128.0.37/23] from ovn-kubernetes |
| | openshift-operator-lifecycle-manager | kubelet | catalog-operator-588944557d-kjh2v | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" |
| | openshift-multus | kubelet | multus-admission-controller-7c64d55f8-fzfsp | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bbe162375a11ed3810a1081c30dd400f461f2421d5f1e27d8792048bbd216956" |
| | openshift-operator-lifecycle-manager | multus | olm-operator-6b56bd877c-tk8xm | AddedInterface | Add eth0 [10.128.0.16/23] from ovn-kubernetes |
| | openshift-ingress-operator | multus | ingress-operator-c588d8cb4-nclxg | AddedInterface | Add eth0 [10.128.0.8/23] from ovn-kubernetes |
| | openshift-ingress-operator | kubelet | ingress-operator-c588d8cb4-nclxg | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3d21c51712e6e0cdd9c776479f6d1ab55bc1085df5bb5f583e69ee192d11fd3" |
| | openshift-operator-lifecycle-manager | kubelet | olm-operator-6b56bd877c-tk8xm | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" |
| | openshift-multus | multus | multus-admission-controller-7c64d55f8-fzfsp | AddedInterface | Add eth0 [10.128.0.26/23] from ovn-kubernetes |
| | openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-ff6c9b66-k8xp8 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:56dffbd86bfae06921432678caf184b335bf2fc6ac7ee128f48aee396d57ea55" |
| | openshift-apiserver | multus | apiserver-6bd884947c-tdlbn | AddedInterface | Add eth0 [10.128.0.35/23] from ovn-kubernetes |
| | openshift-apiserver | kubelet | apiserver-6bd884947c-tdlbn | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b41a8ae60c0eafa4a13e6dcd0e79ba63b0d7bd2bdc28aaed434b3bef98a5dc95" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-server-ca-1 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-node-kubeconfig-controller-nodekubeconfigcontroller | kube-apiserver-operator | SecretCreated | Created Secret/node-kubeconfigs -n openshift-kube-apiserver because it was missing |
| | openshift-operator-lifecycle-manager | multus | catalog-operator-588944557d-kjh2v | AddedInterface | Add eth0 [10.128.0.17/23] from ovn-kubernetes |
| | openshift-marketplace | multus | marketplace-operator-6cc5b65c6b-wqxmh | AddedInterface | Add eth0 [10.128.0.14/23] from ovn-kubernetes |
| | openshift-cluster-node-tuning-operator | multus | cluster-node-tuning-operator-ff6c9b66-k8xp8 | AddedInterface | Add eth0 [10.128.0.10/23] from ovn-kubernetes |
| | openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-ff6c9b66-k8xp8 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:56dffbd86bfae06921432678caf184b335bf2fc6ac7ee128f48aee396d57ea55" |
| | openshift-multus | kubelet | network-metrics-daemon-bnllz | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:80531a0fe966e1cc0582c581951b09a7a4e42037c106748c44859110361b2c1b" |
| | openshift-operator-lifecycle-manager | kubelet | package-server-manager-5c696dbdcd-t7n5b | Created | Created container: kube-rbac-proxy |
| | openshift-operator-lifecycle-manager | kubelet | package-server-manager-5c696dbdcd-t7n5b | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-authentication-operator | oauth-apiserver-revisioncontroller | authentication-operator | RevisionTriggered | new revision 1 triggered by "configmap \"audit-0\" not found" |
| | openshift-catalogd | kubelet | catalogd-controller-manager-67bc7c997f-jdfsm | Created | Created container: kube-rbac-proxy |
| | openshift-catalogd | kubelet | catalogd-controller-manager-67bc7c997f-jdfsm | Started | Started container kube-rbac-proxy |
| | openshift-authentication-operator | cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources | authentication-operator | ServiceAccountCreated | Created ServiceAccount/oauth-openshift -n openshift-authentication because it was missing |
| | openshift-operator-lifecycle-manager | kubelet | package-server-manager-5c696dbdcd-t7n5b | Started | Started container kube-rbac-proxy |
| | openshift-controller-manager | kubelet | controller-manager-67d67c799d-b9bj6 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f122c11c2f6a10ca150b136f7291d2e135b3a182d67809aa49727da289787cee" |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Progressing message changed from "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to act on changes\nOperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to act on changes" to "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to act on changes\nOperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to deploy pods" |
| | openshift-operator-lifecycle-manager | kubelet | package-server-manager-5c696dbdcd-t7n5b | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nCertRotation_InternalLoadBalancerServing_Degraded: configmaps \"loadbalancer-serving-ca\" already exists" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nCertRotation_InternalLoadBalancerServing_Degraded: configmaps \"loadbalancer-serving-ca\" already exists" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kubelet-serving-ca-1 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/cluster-policy-controller-config-1 -n openshift-kube-controller-manager because it was missing |
| | openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/operator-controller-manager-rolebinding -n openshift-operator-controller because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | SecretCreated | Created Secret/localhost-recovery-client-token-2 -n openshift-kube-scheduler because it was missing |
| | openshift-etcd | kubelet | installer-1-master-0 | Started | Started container installer |
| | openshift-catalogd | kubelet | catalogd-controller-manager-67bc7c997f-jdfsm | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3bb3c46533b24f1a6a6669117dc888ed8f0c7ae56b34068a4ff2052335e34c4e" already present on machine |
| | openshift-catalogd | kubelet | catalogd-controller-manager-67bc7c997f-jdfsm | Started | Started container kube-rbac-proxy |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/config-1 -n openshift-kube-controller-manager because it was missing |
| | openshift-catalogd | kubelet | catalogd-controller-manager-67bc7c997f-jdfsm | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-catalogd | kubelet | catalogd-controller-manager-67bc7c997f-jdfsm | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3bb3c46533b24f1a6a6669117dc888ed8f0c7ae56b34068a4ff2052335e34c4e" already present on machine |
| | openshift-catalogd | kubelet | catalogd-controller-manager-67bc7c997f-jdfsm | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-catalogd | kubelet | catalogd-controller-manager-67bc7c997f-jdfsm | Created | Created container: kube-rbac-proxy |
| | openshift-etcd | kubelet | installer-1-master-0 | Created | Created container: installer |
| | openshift-catalogd | kubelet | catalogd-controller-manager-67bc7c997f-jdfsm | Created | Created container: manager |
| | openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/operator-controller-manager-rolebinding because it was missing |
| | openshift-operator-controller | replicaset-controller | operator-controller-controller-manager-85c9b89969 | SuccessfulCreate | Created pod: operator-controller-controller-manager-85c9b89969-4n2ls |
| | openshift-catalogd | kubelet | catalogd-controller-manager-67bc7c997f-jdfsm | Started | Started container manager |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/controller-manager-kubeconfig-1 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nCertRotation_InternalLoadBalancerServing_Degraded: configmaps \"loadbalancer-serving-ca\" already exists" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]" |
| | openshift-catalogd | kubelet | catalogd-controller-manager-67bc7c997f-jdfsm | Created | Created container: manager |
| | openshift-catalogd | catalogd-controller-manager-67bc7c997f-jdfsm_6a07878f-742b-4c0f-b49f-c96828c55bd0 | catalogd-operator-lock | LeaderElection | catalogd-controller-manager-67bc7c997f-jdfsm_6a07878f-742b-4c0f-b49f-c96828c55bd0 became leader |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 1, desired generation is 2." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/1 pods have been updated to the latest generation and 0/1 pods are available" |
| | openshift-catalogd | kubelet | catalogd-controller-manager-67bc7c997f-jdfsm | Started | Started container manager |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | RevisionTriggered | new revision 2 triggered by "required configmap/kube-scheduler-pod has changed" |
| | openshift-operator-controller | default-scheduler | operator-controller-controller-manager-85c9b89969-4n2ls | Scheduled | Successfully assigned openshift-operator-controller/operator-controller-controller-manager-85c9b89969-4n2ls to master-0 |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Progressing message changed from "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to act on changes\nOperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to deploy pods" to "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to deploy pods\nOperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to deploy pods" |
| | openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/operator-controller-proxy-rolebinding because it was missing |
| | openshift-catalogd | catalogd-controller-manager-67bc7c997f-jdfsm_6a07878f-742b-4c0f-b49f-c96828c55bd0 | catalogd-operator-lock | LeaderElection | catalogd-controller-manager-67bc7c997f-jdfsm_6a07878f-742b-4c0f-b49f-c96828c55bd0 became leader |
| | openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | ServiceCreated | Created Service/operator-controller-controller-manager-metrics-service -n openshift-operator-controller because it was missing |
| | openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | MutatingWebhookConfigurationUpdated | Updated MutatingWebhookConfiguration.admissionregistration.k8s.io/catalogd-mutating-webhook-configuration because it changed |
| | openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | ConfigMapCreated | Created ConfigMap/operator-controller-trusted-ca-bundle -n openshift-operator-controller because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/kube-controller-cert-syncer-kubeconfig-1 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/sa-token-signing-certs-1 -n openshift-kube-apiserver because it was missing |
| (x2) | openshift-operator-controller | kubelet | operator-controller-controller-manager-85c9b89969-4n2ls | FailedMount | MountVolume.SetUp failed for volume "ca-certs" : configmap "operator-controller-trusted-ca-bundle" not found |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | StartingNewRevision | new revision 3 triggered by "required configmap/serviceaccount-ca has changed" |
| | openshift-route-controller-manager | default-scheduler | route-controller-manager-6965bd7478-x8mdg | Scheduled | Successfully assigned openshift-route-controller-manager/route-controller-manager-6965bd7478-x8mdg to master-0 |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-targetconfigcontroller | kube-controller-manager-operator | ConfigMapUpdated | Updated ConfigMap/config -n openshift-kube-controller-manager: cause by changes in data.config.yaml |
| | openshift-kube-scheduler | kubelet | installer-1-master-0 | Killing | Stopping container installer |
| | openshift-oauth-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled up replica set apiserver-865765995 to 1 |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/serviceaccount-ca-1 -n openshift-kube-controller-manager because it was missing |
| | openshift-oauth-apiserver | replicaset-controller | apiserver-865765995 | SuccessfulCreate | Created pod: apiserver-865765995-c58rq |
| | openshift-oauth-apiserver | default-scheduler | apiserver-865765995-c58rq | Scheduled | Successfully assigned openshift-oauth-apiserver/apiserver-865765995-c58rq to master-0 |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: ",Progressing changed from False to True ("APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 0, desired generation is 1."),Available message changed from "APIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-audit-policies-1 -n openshift-kube-apiserver because it was missing |
| | openshift-authentication-operator | cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources | authentication-operator | ServiceCreated | Created Service/oauth-openshift -n openshift-authentication because it was missing |
| | openshift-authentication-operator | cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources | authentication-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/system:openshift:oauth-servercert-trust -n openshift-config-managed because it was missing |
| | openshift-authentication-operator | oauth-apiserver-oauthapiservercontroller-workloadworkloadcontroller | authentication-operator | DeploymentCreated | Created Deployment.apps/apiserver -n openshift-oauth-apiserver because it was missing |
| | openshift-authentication-operator | cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources | authentication-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/system:openshift:oauth-servercert-trust -n openshift-config-managed because it was missing |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "OAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 1" to "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 2",Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 1" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 2" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/recycler-config-1 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/etcd-client-1 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/kube-scheduler-pod-3 -n openshift-kube-scheduler because it was missing |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-targetconfigcontroller | kube-controller-manager-operator | ConfigMapUpdated | Updated ConfigMap/cluster-policy-controller-config -n openshift-kube-controller-manager: cause by changes in data.config.yaml |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/service-ca-1 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | PodCreated | Created Pod/installer-2-master-0 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/service-account-private-key-1 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/config-3 -n openshift-kube-scheduler because it was missing |
| | openshift-authentication-operator | cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources | authentication-operator | ConfigMapCreated | Created ConfigMap/audit -n openshift-authentication because it was missing |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: \"service-ca.crt\" key of the \"openshift-authentication/v4-0-config-system-service-ca\" CM is empty\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-client-token-1 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/serviceaccount-ca-3 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/serving-cert-1 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-serving-certkey-1 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/scheduler-kubeconfig-3 -n openshift-kube-scheduler because it was missing |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.180.229:443/healthz\": dial tcp 172.30.180.229:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: \"service-ca.crt\" key of the \"openshift-authentication/v4-0-config-system-service-ca\" CM is empty\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.180.229:443/healthz\": dial tcp 172.30.180.229:443: connect: connection refused\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " |
| (x58) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | RequiredInstallerResourcesMissing | configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0 |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/localhost-recovery-client-token-1 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | RevisionTriggered | new revision 1 triggered by "configmap \"kube-controller-manager-pod-0\" not found" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | RevisionTriggered | new revision 1 triggered by "configmap \"kube-apiserver-pod-0\" not found" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-3 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | StartingNewRevision | new revision 2 triggered by "required configmap/config has changed,required configmap/cluster-policy-controller-config has changed" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.180.229:443/healthz\": dial tcp 172.30.180.229:443: connect: connection refused\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.180.229:443/healthz\": dial tcp 172.30.180.229:443: connect: connection refused\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " |
| (x45) | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | RequiredInstallerResourcesMissing | configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0 |
| | openshift-authentication-operator | oauth-apiserver-oauthapiservercontroller-workloadworkloadcontroller | authentication-operator | DeploymentUpdated | Updated Deployment.apps/apiserver -n openshift-oauth-apiserver because it changed |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]" to "NodeControllerDegraded: All master nodes are ready",Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 1"),Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 1" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | NodeTargetRevisionChanged | Updating node "master-0" from revision 0 to 1 because node master-0 static pod not found |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 0, desired generation is 1." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 1/1 pods have been updated to the latest generation and 0/1 pods are available" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | SecretCreated | Created Secret/localhost-recovery-client-token-3 -n openshift-kube-scheduler because it was missing |
| | openshift-cluster-version | kubelet | cluster-version-operator-76959b6567-v49tq | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-release@sha256:6177c447b98c36a42fd45fa2ba413da73d14d0a7ad3aecfa977554f5ae9583cc" in 11.746s (11.746s including waiting). Image size: 512819769 bytes. |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.180.229:443/healthz\": dial tcp 172.30.180.229:443: connect: connection refused\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.180.229:443/healthz\": dial tcp 172.30.180.229:443: connect: connection refused\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-targetconfigcontroller | kube-controller-manager-operator | ConfigMapUpdated | Updated ConfigMap/serviceaccount-ca -n openshift-kube-controller-manager: cause by changes in data.ca-bundle.crt |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/kube-controller-manager-pod-2 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | SecretCreated | Created Secret/serving-cert-3 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | PodCreated | Created Pod/installer-1-master-0 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | RevisionTriggered | new revision 3 triggered by "required configmap/serviceaccount-ca has changed" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]" to "NodeControllerDegraded: All master nodes are ready",Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 1"),Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 1" |
openshift-kube-apiserver-operator |
kube-apiserver-operator-installer-controller |
kube-apiserver-operator |
NodeTargetRevisionChanged |
Updating node "master-0" from revision 0 to 1 because node master-0 static pod not found | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/config-2 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-installer-controller |
kube-apiserver-operator |
PodCreated |
Created Pod/installer-1-master-0 -n openshift-kube-apiserver because it was missing | |
openshift-marketplace |
kubelet |
marketplace-operator-6cc5b65c6b-wqxmh |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dab7a82d88f90f1ef4ac307b16132d4d573a4fa9080acc3272ca084613bd902a" in 13.927s (13.927s including waiting). Image size: 452956763 bytes. | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/cluster-policy-controller-config-2 -n openshift-kube-controller-manager because it was missing | |
openshift-apiserver |
kubelet |
apiserver-6bd884947c-tdlbn |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b41a8ae60c0eafa4a13e6dcd0e79ba63b0d7bd2bdc28aaed434b3bef98a5dc95" in 14.263s (14.263s including waiting). Image size: 584205881 bytes. | |
openshift-dns-operator |
kubelet |
dns-operator-86b8869b79-lmqrr |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1c8b9784a60860a08bd47935f0767b7b7f8f36c5c0adb7623a31b82c01d4c09" in 13.81s (13.81s including waiting). Image size: 463090242 bytes. | |
openshift-kube-apiserver |
multus |
installer-1-master-0 |
AddedInterface |
Add eth0 [10.128.0.44/23] from ovn-kubernetes | |
openshift-cluster-node-tuning-operator |
cluster-node-tuning-operator-ff6c9b66-k8xp8_1c36ac29-1f75-4f2c-989c-a6ba92e249f1 |
node-tuning-operator-lock |
LeaderElection |
cluster-node-tuning-operator-ff6c9b66-k8xp8_1c36ac29-1f75-4f2c-989c-a6ba92e249f1 became leader | |
openshift-monitoring |
kubelet |
cluster-monitoring-operator-756d64c8c4-ddgs9 |
Started |
Started container cluster-monitoring-operator | |
openshift-route-controller-manager |
multus |
route-controller-manager-6965bd7478-x8mdg |
AddedInterface |
Add eth0 [10.128.0.40/23] from ovn-kubernetes | |
openshift-monitoring |
kubelet |
cluster-monitoring-operator-756d64c8c4-ddgs9 |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0100b616991bd8bca68d583e902283aa4cc0d388046437d5d68407190e3fb041" in 14.788s (14.788s including waiting). Image size: 479280723 bytes. | |
openshift-controller-manager |
kubelet |
controller-manager-67d67c799d-b9bj6 |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f122c11c2f6a10ca150b136f7291d2e135b3a182d67809aa49727da289787cee" in 14.081s (14.081s including waiting). Image size: 553036394 bytes. | |
openshift-multus |
kubelet |
network-metrics-daemon-bnllz |
Created |
Created container: network-metrics-daemon | |
openshift-multus |
kubelet |
network-metrics-daemon-bnllz |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:80531a0fe966e1cc0582c581951b09a7a4e42037c106748c44859110361b2c1b" in 15.028s (15.028s including waiting). Image size: 443654349 bytes. | |
openshift-cluster-olm-operator |
olm-status-controller-statussyncer_olm |
cluster-olm-operator |
OperatorStatusChanged |
Status for clusteroperator/olm changed: Progressing message changed from "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to deploy pods\nOperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to deploy pods" to "OperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to deploy pods",Available message changed from "CatalogdDeploymentCatalogdControllerManagerAvailable: Waiting for Deployment\nOperatorcontrollerDeploymentOperatorControllerControllerManagerAvailable: Waiting for Deployment" to "OperatorcontrollerDeploymentOperatorControllerControllerManagerAvailable: Waiting for Deployment" | |
openshift-multus |
kubelet |
multus-admission-controller-7c64d55f8-fzfsp |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine | |
openshift-multus |
kubelet |
multus-admission-controller-7c64d55f8-fzfsp |
Started |
Started container multus-admission-controller | |
openshift-multus |
kubelet |
multus-admission-controller-7c64d55f8-fzfsp |
Created |
Created container: multus-admission-controller | |
openshift-multus |
kubelet |
multus-admission-controller-7c64d55f8-fzfsp |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bbe162375a11ed3810a1081c30dd400f461f2421d5f1e27d8792048bbd216956" in 14.767s (14.767s including waiting). Image size: 451401927 bytes. | |
openshift-kube-apiserver |
kubelet |
installer-1-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49" already present on machine | |
openshift-multus |
kubelet |
network-metrics-daemon-bnllz |
Started |
Started container network-metrics-daemon | |
openshift-image-registry |
kubelet |
cluster-image-registry-operator-96c8c64b8-dtwmd |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc03f91dbf08df9907c0ebad30c54a7fa92285b19ec4e440ed762b197378a861" in 15.099s (15.099s including waiting). Image size: 543577525 bytes. | |
openshift-operator-lifecycle-manager |
kubelet |
catalog-operator-588944557d-kjh2v |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" in 14.875s (14.875s including waiting). Image size: 857432360 bytes. | |
| | openshift-cluster-node-tuning-operator | performance-profile-controller | cluster-node-tuning-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-ff6c9b66-k8xp8 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:56dffbd86bfae06921432678caf184b335bf2fc6ac7ee128f48aee396d57ea55" in 15.131s (15.131s including waiting). Image size: 672642165 bytes. |
| | openshift-ingress-operator | cluster-ingress-operator | ingress-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-multus | kubelet | network-metrics-daemon-bnllz | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-kube-scheduler | kubelet | installer-2-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f56d01ef06fe016c748e8c87538dea8e9fcc84856eb116bd3597cc8e042e9f0a" already present on machine |
| | openshift-kube-scheduler | multus | installer-2-master-0 | AddedInterface | Add eth0 [10.128.0.42/23] from ovn-kubernetes |
| | openshift-ingress-operator | kubelet | ingress-operator-c588d8cb4-nclxg | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-monitoring | kubelet | cluster-monitoring-operator-756d64c8c4-ddgs9 | Created | Created container: cluster-monitoring-operator |
| | openshift-operator-lifecycle-manager | kubelet | olm-operator-6b56bd877c-tk8xm | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" in 14.829s (14.829s including waiting). Image size: 857432360 bytes. |
| | openshift-ingress-operator | kubelet | ingress-operator-c588d8cb4-nclxg | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3d21c51712e6e0cdd9c776479f6d1ab55bc1085df5bb5f583e69ee192d11fd3" in 14.724s (14.724s including waiting). Image size: 506056636 bytes. |
| | openshift-image-registry | image-registry-operator | openshift-master-controllers | LeaderElection | cluster-image-registry-operator-96c8c64b8-dtwmd_602206d5-3313-457c-bf8a-a9d1fc9cc648 became leader |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 2" to "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 3",Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 2" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 3" |
| | openshift-oauth-apiserver | multus | apiserver-865765995-c58rq | AddedInterface | Add eth0 [10.128.0.41/23] from ovn-kubernetes |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/controller-manager-kubeconfig-2 -n openshift-kube-controller-manager because it was missing |
| | openshift-marketplace | kubelet | marketplace-operator-6cc5b65c6b-wqxmh | Created | Created container: marketplace-operator |
| | openshift-marketplace | kubelet | marketplace-operator-6cc5b65c6b-wqxmh | Started | Started container marketplace-operator |
| | openshift-operator-controller | multus | operator-controller-controller-manager-85c9b89969-4n2ls | AddedInterface | Add eth0 [10.128.0.39/23] from ovn-kubernetes |
| | openshift-operator-controller | kubelet | operator-controller-controller-manager-85c9b89969-4n2ls | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae30b3ab740f21c451d0272bceacb99fa34d22bbf2ea22f1e1e18230a156104b" already present on machine |
| | openshift-kube-controller-manager | multus | installer-1-master-0 | AddedInterface | Add eth0 [10.128.0.43/23] from ovn-kubernetes |
| | openshift-operator-lifecycle-manager | kubelet | package-server-manager-5c696dbdcd-t7n5b | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" in 14.438s (14.438s including waiting). Image size: 857432360 bytes. |
| | openshift-cluster-version | openshift-cluster-version | version | LeaderElection | master-0_83da6673-d8e6-4dfe-8e2a-4c396600303b became leader |
| | openshift-cluster-version | kubelet | cluster-version-operator-76959b6567-v49tq | Started | Started container cluster-version-operator |
| | openshift-cluster-version | kubelet | cluster-version-operator-76959b6567-v49tq | Created | Created container: cluster-version-operator |
| | openshift-ingress-operator | certificate_controller | router-ca | CreatedWildcardCACert | Created a default wildcard CA certificate |
| | openshift-image-registry | image-registry-operator | cluster-image-registry-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-kube-controller-manager | kubelet | installer-1-master-0 | Started | Started container installer |
| | openshift-cluster-node-tuning-operator | default-scheduler | tuned-2ffzt | Scheduled | Successfully assigned openshift-cluster-node-tuning-operator/tuned-2ffzt to master-0 |
| | openshift-cluster-node-tuning-operator | kubelet | tuned-2ffzt | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:56dffbd86bfae06921432678caf184b335bf2fc6ac7ee128f48aee396d57ea55" already present on machine |
| | openshift-cluster-node-tuning-operator | kubelet | tuned-2ffzt | Created | Created container: tuned |
| | openshift-operator-lifecycle-manager | kubelet | olm-operator-6b56bd877c-tk8xm | Created | Created container: olm-operator |
| | openshift-operator-lifecycle-manager | kubelet | olm-operator-6b56bd877c-tk8xm | Started | Started container olm-operator |
| | openshift-multus | kubelet | multus-admission-controller-7c64d55f8-fzfsp | Created | Created container: kube-rbac-proxy |
| | openshift-multus | kubelet | multus-admission-controller-7c64d55f8-fzfsp | Started | Started container kube-rbac-proxy |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceCreated | Created Service/prometheus-operator-admission-webhook -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceAccountCreated | Created ServiceAccount/prometheus-operator-admission-webhook -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/prometheus-operator because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/prometheus-operator because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ConfigMapCreated | Created ConfigMap/metrics-client-ca -n openshift-monitoring because it was missing |
| | openshift-cluster-node-tuning-operator | kubelet | tuned-2ffzt | Started | Started container tuned |
| | openshift-kube-controller-manager | kubelet | installer-1-master-0 | Created | Created container: installer |
| | openshift-kube-controller-manager | kubelet | installer-1-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e8ae0b5bab647ff989f276cead5f360bcb88c813f181d75dc3106eb5dbde0b39" already present on machine |
| | openshift-operator-controller | kubelet | operator-controller-controller-manager-85c9b89969-4n2ls | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-operator-controller | kubelet | operator-controller-controller-manager-85c9b89969-4n2ls | Created | Created container: kube-rbac-proxy |
| | openshift-monitoring | deployment-controller | prometheus-operator-admission-webhook | ScalingReplicaSet | Scaled up replica set prometheus-operator-admission-webhook-695b766898 to 1 |
| | openshift-monitoring | replicaset-controller | prometheus-operator-admission-webhook-695b766898 | SuccessfulCreate | Created pod: prometheus-operator-admission-webhook-695b766898-nm8rs |
| | openshift-operator-controller | kubelet | operator-controller-controller-manager-85c9b89969-4n2ls | Started | Started container kube-rbac-proxy |
| | openshift-operator-lifecycle-manager | package-server-manager-5c696dbdcd-t7n5b_60269097-2ec7-4e51-82ec-75bf11f805fe | packageserver-controller-lock | LeaderElection | package-server-manager-5c696dbdcd-t7n5b_60269097-2ec7-4e51-82ec-75bf11f805fe became leader |
openshift-monitoring |
default-scheduler |
prometheus-operator-admission-webhook-695b766898-nm8rs |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | |
openshift-operator-controller |
kubelet |
operator-controller-controller-manager-85c9b89969-4n2ls |
Started |
Started container manager | |
openshift-operator-controller |
kubelet |
operator-controller-controller-manager-85c9b89969-4n2ls |
Created |
Created container: manager | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ServiceAccountCreated |
Created ServiceAccount/prometheus-operator -n openshift-monitoring because it was missing | |
openshift-cluster-node-tuning-operator |
daemonset-controller |
tuned |
SuccessfulCreate |
Created pod: tuned-2ffzt | |
openshift-monitoring |
cluster-monitoring-operator-openshiftmonitoringtelemeterclientcertrequester |
cluster-monitoring-operator |
ClientCertificateCreated |
A new client certificate for OpenShiftMonitoringTelemeterClientCertRequester is available | |
openshift-monitoring |
cluster-monitoring-operator-openshiftmonitoringclientcertrequester |
cluster-monitoring-operator |
ClientCertificateCreated |
A new client certificate for OpenShiftMonitoringClientCertRequester is available | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
SecretCreated |
Created Secret/alert-relabel-configs -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator-openshiftmonitoringtelemeterclientcertrequester |
cluster-monitoring-operator |
CSRCreated |
A csr "system:openshift:openshift-monitoring-n8m2c" is created for OpenShiftMonitoringTelemeterClientCertRequester | |
openshift-monitoring |
cluster-monitoring-operator-openshiftmonitoringclientcertrequester |
cluster-monitoring-operator |
CSRCreated |
A csr "system:openshift:openshift-monitoring-fg42f" is created for OpenShiftMonitoringClientCertRequester | |
openshift-monitoring |
cluster-monitoring-operator-openshiftmonitoringtelemeterclientcertrequester |
cluster-monitoring-operator |
NoValidCertificateFound |
No valid client certificate for OpenShiftMonitoringTelemeterClientCertRequester is found: unable to parse certificate: data does not contain any valid RSA or ECDSA certificates | |
openshift-multus |
kubelet |
network-metrics-daemon-bnllz |
Created |
Created container: kube-rbac-proxy | |
openshift-multus |
kubelet |
network-metrics-daemon-bnllz |
Started |
Started container kube-rbac-proxy | |
openshift-monitoring |
cluster-monitoring-operator-openshiftmonitoringclientcertrequester |
cluster-monitoring-operator |
NoValidCertificateFound |
No valid client certificate for OpenShiftMonitoringClientCertRequester is found: unable to parse certificate: data does not contain any valid RSA or ECDSA certificates | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-dns-operator | kubelet | dns-operator-86b8869b79-lmqrr | Created | Created container: dns-operator |
| | kube-system | cluster-policy-controller-webhook-authenticator-cert-approver-csr-approver-controller-webhookauthenticatorcertapprover_csr-approver-controller | bootstrap-kube-controller-manager-master-0 | CSRApproval | The CSR "system:openshift:openshift-monitoring-fg42f" has been approved |
| | kube-system | cluster-policy-controller-webhook-authenticator-cert-approver-csr-approver-controller-webhookauthenticatorcertapprover_csr-approver-controller | bootstrap-kube-controller-manager-master-0 | CSRApproval | The CSR "system:openshift:openshift-monitoring-n8m2c" has been approved |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-dns namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-ingress namespace |
| | openshift-dns-operator | kubelet | dns-operator-86b8869b79-lmqrr | Started | Started container dns-operator |
| | openshift-dns-operator | kubelet | dns-operator-86b8869b79-lmqrr | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-multus | kubelet | multus-admission-controller-7c64d55f8-fzfsp | Started | Started container kube-rbac-proxy |
| | openshift-multus | kubelet | multus-admission-controller-7c64d55f8-fzfsp | Created | Created container: kube-rbac-proxy |
| | openshift-multus | kubelet | network-metrics-daemon-bnllz | Created | Created container: kube-rbac-proxy |
| | openshift-multus | kubelet | network-metrics-daemon-bnllz | Started | Started container kube-rbac-proxy |
| | openshift-cluster-node-tuning-operator | daemonset-controller | tuned | SuccessfulCreate | Created pod: tuned-2ffzt |
| | openshift-cluster-node-tuning-operator | kubelet | tuned-2ffzt | Started | Started container tuned |
| | openshift-dns-operator | kubelet | dns-operator-86b8869b79-lmqrr | Created | Created container: kube-rbac-proxy |
| | openshift-dns-operator | kubelet | dns-operator-86b8869b79-lmqrr | Started | Started container kube-rbac-proxy |
| | openshift-dns-operator | cluster-dns-operator | dns-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-dns | daemonset-controller | dns-default | SuccessfulCreate | Created pod: dns-default-wxhtx |
| | openshift-kube-apiserver | kubelet | installer-1-master-0 | Started | Started container installer |
| | openshift-kube-apiserver | kubelet | installer-1-master-0 | Created | Created container: installer |
| | openshift-kube-scheduler | kubelet | installer-2-master-0 | Killing | Stopping container installer |
| | openshift-kube-scheduler | kubelet | installer-2-master-0 | Started | Started container installer |
| | openshift-kube-scheduler | kubelet | installer-2-master-0 | Created | Created container: installer |
| | openshift-cluster-node-tuning-operator | kubelet | tuned-2ffzt | Created | Created container: tuned |
| | openshift-cluster-node-tuning-operator | kubelet | tuned-2ffzt | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:56dffbd86bfae06921432678caf184b335bf2fc6ac7ee128f48aee396d57ea55" already present on machine |
| | openshift-cluster-node-tuning-operator | default-scheduler | tuned-2ffzt | Scheduled | Successfully assigned openshift-cluster-node-tuning-operator/tuned-2ffzt to master-0 |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-monitoring | cluster-monitoring-operator-openshiftmonitoringclientcertrequester | cluster-monitoring-operator | NoValidCertificateFound | No valid client certificate for OpenShiftMonitoringClientCertRequester is found: unable to parse certificate: data does not contain any valid RSA or ECDSA certificates |
| | openshift-monitoring | cluster-monitoring-operator-openshiftmonitoringtelemeterclientcertrequester | cluster-monitoring-operator | NoValidCertificateFound | No valid client certificate for OpenShiftMonitoringTelemeterClientCertRequester is found: unable to parse certificate: data does not contain any valid RSA or ECDSA certificates |
| | openshift-monitoring | cluster-monitoring-operator-openshiftmonitoringclientcertrequester | cluster-monitoring-operator | CSRCreated | A csr "system:openshift:openshift-monitoring-fg42f" is created for OpenShiftMonitoringClientCertRequester |
| | openshift-monitoring | cluster-monitoring-operator-openshiftmonitoringtelemeterclientcertrequester | cluster-monitoring-operator | CSRCreated | A csr "system:openshift:openshift-monitoring-n8m2c" is created for OpenShiftMonitoringTelemeterClientCertRequester |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | SecretCreated | Created Secret/alert-relabel-configs -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator-openshiftmonitoringclientcertrequester | cluster-monitoring-operator | ClientCertificateCreated | A new client certificate for OpenShiftMonitoringClientCertRequester is available |
| | openshift-monitoring | cluster-monitoring-operator-openshiftmonitoringtelemeterclientcertrequester | cluster-monitoring-operator | ClientCertificateCreated | A new client certificate for OpenShiftMonitoringTelemeterClientCertRequester is available |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceAccountCreated | Created ServiceAccount/prometheus-operator -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ConfigMapCreated | Created ConfigMap/metrics-client-ca -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/prometheus-operator because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/prometheus-operator because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceAccountCreated | Created ServiceAccount/prometheus-operator-admission-webhook -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceCreated | Created Service/prometheus-operator-admission-webhook -n openshift-monitoring because it was missing |
| | openshift-operator-controller | operator-controller-controller-manager-85c9b89969-4n2ls_14bd8d2b-465a-4e1d-8ae6-af93df4800f1 | 9c4404e7.operatorframework.io | LeaderElection | operator-controller-controller-manager-85c9b89969-4n2ls_14bd8d2b-465a-4e1d-8ae6-af93df4800f1 became leader |
| | openshift-operator-lifecycle-manager | kubelet | catalog-operator-588944557d-kjh2v | Created | Created container: catalog-operator |
| | openshift-ingress-operator | kubelet | ingress-operator-c588d8cb4-nclxg | Started | Started container kube-rbac-proxy |
| | openshift-ingress-operator | kubelet | ingress-operator-c588d8cb4-nclxg | Created | Created container: kube-rbac-proxy |
| | openshift-operator-lifecycle-manager | kubelet | catalog-operator-588944557d-kjh2v | Started | Started container catalog-operator |
| | openshift-monitoring | default-scheduler | prometheus-operator-admission-webhook-695b766898-nm8rs | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-oauth-apiserver | kubelet | apiserver-865765995-c58rq | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cf24751d6b6d66fcfc26aa8e0f94a4248a3edab6dbfe3fe9651a90b6b4d92192" |
| | openshift-ingress-operator | ingress_controller | default | Admitted | ingresscontroller passed validation |
| | openshift-dns | default-scheduler | dns-default-wxhtx | Scheduled | Successfully assigned openshift-dns/dns-default-wxhtx to master-0 |
| | openshift-monitoring | replicaset-controller | prometheus-operator-admission-webhook-695b766898 | SuccessfulCreate | Created pod: prometheus-operator-admission-webhook-695b766898-nm8rs |
| | openshift-route-controller-manager | kubelet | route-controller-manager-6965bd7478-x8mdg | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0871b6c16b38a2eda5d1c89fd75079aff0775224307e940557e6fda6ba229f38" |
| | openshift-controller-manager | openshift-controller-manager | openshift-master-controllers | LeaderElection | controller-manager-67d67c799d-b9bj6 became leader |
| | openshift-controller-manager | kubelet | controller-manager-67d67c799d-b9bj6 | Started | Started container controller-manager |
| | openshift-controller-manager | kubelet | controller-manager-67d67c799d-b9bj6 | Created | Created container: controller-manager |
| | openshift-apiserver | kubelet | apiserver-6bd884947c-tdlbn | Started | Started container fix-audit-permissions |
| | openshift-apiserver | kubelet | apiserver-6bd884947c-tdlbn | Created | Created container: fix-audit-permissions |
| | openshift-apiserver | kubelet | apiserver-6bd884947c-tdlbn | Created | Created container: openshift-apiserver-check-endpoints |
| | openshift-config-managed | certificate_publisher_controller | default-ingress-cert | PublishedRouterCA | Published "default-ingress-cert" in "openshift-config-managed" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | PodCreated | Created Pod/installer-3-master-0 -n openshift-kube-scheduler because it was missing |
| | openshift-dns | kubelet | node-resolver-tzv2h | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e155421620a4ac28a759265f53059b75308fdd1491caeba6a9a34d2fbeab4954" already present on machine |
| | openshift-dns | default-scheduler | node-resolver-tzv2h | Scheduled | Successfully assigned openshift-dns/node-resolver-tzv2h to master-0 |
| | openshift-dns | daemonset-controller | node-resolver | SuccessfulCreate | Created pod: node-resolver-tzv2h |
| | openshift-ingress | default-scheduler | router-default-864ddd5f56-g8w2f | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-kube-scheduler | kubelet | installer-3-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f56d01ef06fe016c748e8c87538dea8e9fcc84856eb116bd3597cc8e042e9f0a" already present on machine |
| | openshift-kube-scheduler | multus | installer-3-master-0 | AddedInterface | Add eth0 [10.128.0.47/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | redhat-operators-7x72v | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | redhat-operators-7x72v | Created | Created container: extract-utilities |
| | openshift-marketplace | kubelet | redhat-operators-7x72v | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" already present on machine |
| | openshift-dns | kubelet | dns-default-wxhtx | FailedMount | MountVolume.SetUp failed for volume "metrics-tls" : secret "dns-default-metrics-tls" not found |
| | openshift-ingress-operator | certificate_controller | default | CreatedDefaultCertificate | Created default wildcard certificate "router-certs-default" |
| | openshift-marketplace | multus | redhat-operators-7x72v | AddedInterface | Add eth0 [10.128.0.46/23] from ovn-kubernetes |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapUpdated | Updated ConfigMap/config -n openshift-route-controller-manager: cause by changes in data.openshift-route-controller-manager.client-ca.configmap |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapUpdated | Updated ConfigMap/config -n openshift-controller-manager: cause by changes in data.openshift-controller-manager.client-ca.configmap |
| (x4) | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | DeploymentUpdated | Updated Deployment.apps/route-controller-manager -n openshift-route-controller-manager because it changed |
| (x5) | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | DeploymentUpdated | Updated Deployment.apps/controller-manager -n openshift-controller-manager because it changed |
| | openshift-apiserver | kubelet | apiserver-6bd884947c-tdlbn | Started | Started container openshift-apiserver-check-endpoints |
| | openshift-marketplace | default-scheduler | redhat-operators-7x72v | Scheduled | Successfully assigned openshift-marketplace/redhat-operators-7x72v to master-0 |
| | openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled up replica set controller-manager-b9c8fdfbc to 1 from 0 |
| | openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled down replica set controller-manager-67d67c799d to 0 from 1 |
| | openshift-route-controller-manager | replicaset-controller | route-controller-manager-6965bd7478 | SuccessfulDelete | Deleted pod: route-controller-manager-6965bd7478-x8mdg |
| | openshift-controller-manager | replicaset-controller | controller-manager-b9c8fdfbc | SuccessfulCreate | Created pod: controller-manager-b9c8fdfbc-rh9v2 |
| | openshift-marketplace | default-scheduler | certified-operators-xqt6f | Scheduled | Successfully assigned openshift-marketplace/certified-operators-xqt6f to master-0 |
| | openshift-controller-manager | default-scheduler | controller-manager-b9c8fdfbc-rh9v2 | FailedScheduling | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. |
| | openshift-controller-manager | replicaset-controller | controller-manager-67d67c799d | SuccessfulDelete | Deleted pod: controller-manager-67d67c799d-b9bj6 |
| | openshift-route-controller-manager | replicaset-controller | route-controller-manager-6978b88779 | SuccessfulCreate | Created pod: route-controller-manager-6978b88779-vp5tv |
| | openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled down replica set route-controller-manager-6965bd7478 to 0 from 1 |
| | openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled up replica set route-controller-manager-6978b88779 to 1 from 0 |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/kube-controller-cert-syncer-kubeconfig-2 -n openshift-kube-controller-manager because it was missing |
| | openshift-apiserver | kubelet | apiserver-6bd884947c-tdlbn | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49" already present on machine |
| | openshift-apiserver | kubelet | apiserver-6bd884947c-tdlbn | Started | Started container openshift-apiserver |
| | openshift-apiserver | kubelet | apiserver-6bd884947c-tdlbn | Created | Created container: openshift-apiserver |
| | openshift-apiserver | kubelet | apiserver-6bd884947c-tdlbn | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b41a8ae60c0eafa4a13e6dcd0e79ba63b0d7bd2bdc28aaed434b3bef98a5dc95" already present on machine |
| | openshift-cluster-version | openshift-cluster-version | version | RetrievePayload | Retrieving and verifying payload version="4.18.32" image="quay.io/openshift-release-dev/ocp-release@sha256:6177c447b98c36a42fd45fa2ba413da73d14d0a7ad3aecfa977554f5ae9583cc" |
| | openshift-cluster-version | openshift-cluster-version | version | LoadPayload | Loading payload version="4.18.32" image="quay.io/openshift-release-dev/ocp-release@sha256:6177c447b98c36a42fd45fa2ba413da73d14d0a7ad3aecfa977554f5ae9583cc" |
| | openshift-ingress | deployment-controller | router-default | ScalingReplicaSet | Scaled up replica set router-default-864ddd5f56 to 1 |
| | openshift-ingress | replicaset-controller | router-default-864ddd5f56 | SuccessfulCreate | Created pod: router-default-864ddd5f56-g8w2f |
| | openshift-config-managed | certificate_publisher_controller | router-certs | PublishedRouterCertificates | Published router certificates |
| | openshift-dns | kubelet | dns-default-wxhtx | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8ea13b0cbfe9be0d3d7ea80d50e512af6a453921a553c7c79b566530142b611b" |
| | openshift-marketplace | multus | certified-operators-xqt6f | AddedInterface | Add eth0 [10.128.0.48/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | certified-operators-xqt6f | Started | Started container extract-utilities |
| | openshift-dns | kubelet | node-resolver-tzv2h | Created | Created container: dns-node-resolver |
| | openshift-dns | kubelet | node-resolver-tzv2h | Started | Started container dns-node-resolver |
| (x10) | openshift-operator-lifecycle-manager | operator-lifecycle-manager | packageserver | NoOperatorGroup | csv in namespace with no operatorgroups |
| | openshift-marketplace | kubelet | certified-operators-xqt6f | Created | Created container: extract-utilities |
| | openshift-kube-scheduler | kubelet | installer-3-master-0 | Started | Started container installer |
| | openshift-kube-scheduler | kubelet | installer-3-master-0 | Created | Created container: installer |
| | openshift-marketplace | kubelet | certified-operators-xqt6f | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" already present on machine |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/serviceaccount-ca-2 -n openshift-kube-controller-manager because it was missing |
| | openshift-marketplace | kubelet | redhat-operators-7x72v | Pulling | Pulling image "registry.redhat.io/redhat/redhat-operator-index:v4.18" |
| | openshift-dns | multus | dns-default-wxhtx | AddedInterface | Add eth0 [10.128.0.45/23] from ovn-kubernetes |
| | openshift-marketplace | default-scheduler | community-operators-662mc | Scheduled | Successfully assigned openshift-marketplace/community-operators-662mc to master-0 |
| | openshift-apiserver | kubelet | apiserver-6bd884947c-tdlbn | Unhealthy | Startup probe failed: HTTP probe failed with statuscode: 500 |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 3, desired generation is 4.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: observed generation is 1, desired generation is 2.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" to "Progressing: deployment/controller-manager: observed generation is 4, desired generation is 5.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: observed generation is 2, desired generation is 3.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" |
| | openshift-apiserver | kubelet | apiserver-6bd884947c-tdlbn | ProbeError | Startup probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd ok [+]poststarthook/start-apiserver-admission-initializer ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/max-in-flight-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/image.openshift.io-apiserver-caches ok [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld [+]poststarthook/project.openshift.io-projectcache ok [+]poststarthook/project.openshift.io-projectauthorizationcache ok [+]poststarthook/openshift.io-startinformers ok [+]poststarthook/openshift.io-restmapperupdater ok [+]poststarthook/quota.openshift.io-clusterquotamapping ok livez check failed |
| | openshift-marketplace | kubelet | certified-operators-xqt6f | Pulling | Pulling image "registry.redhat.io/redhat/certified-operator-index:v4.18" |
| | openshift-cluster-version | openshift-cluster-version | version | PayloadLoaded | Payload loaded version="4.18.32" image="quay.io/openshift-release-dev/ocp-release@sha256:6177c447b98c36a42fd45fa2ba413da73d14d0a7ad3aecfa977554f5ae9583cc" architecture="amd64" |
| | openshift-controller-manager | kubelet | controller-manager-67d67c799d-b9bj6 | Killing | Stopping container controller-manager |
| | openshift-controller-manager | kubelet | controller-manager-67d67c799d-b9bj6 | Unhealthy | Readiness probe failed: Get "https://10.128.0.38:8443/healthz": dial tcp 10.128.0.38:8443: connect: connection refused |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/service-ca-2 -n openshift-kube-controller-manager because it was missing |
| | openshift-controller-manager | kubelet | controller-manager-67d67c799d-b9bj6 | ProbeError | Readiness probe error: Get "https://10.128.0.38:8443/healthz": dial tcp 10.128.0.38:8443: connect: connection refused body: |
| | openshift-marketplace | default-scheduler | redhat-marketplace-sft6r | Scheduled | Successfully assigned openshift-marketplace/redhat-marketplace-sft6r to master-0 |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/recycler-config-2 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-targetconfigcontroller | kube-controller-manager-operator | ConfigMapUpdated | Updated ConfigMap/kube-controller-manager-pod -n openshift-kube-controller-manager: cause by changes in data.pod.yaml |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.180.229:443/healthz\": dial tcp 172.30.180.229:443: connect: connection refused\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.180.229:443/healthz\": dial tcp 172.30.180.229:443: connect: connection refused\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nGarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: read udp 10.128.0.13:57741->172.30.0.10:53: read: connection refused" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: read udp 10.128.0.13:57741->172.30.0.10:53: read: connection refused" to "NodeControllerDegraded: All master nodes are ready" |
| | openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObserveRouterSecret | namedCertificates changed to []interface {}{map[string]interface {}{"certFile":"/var/config/system/secrets/v4-0-config-system-router-certs/apps.sno.openstack.lab", "keyFile":"/var/config/system/secrets/v4-0-config-system-router-certs/apps.sno.openstack.lab", "names":[]interface {}{"*.apps.sno.openstack.lab"}}} |
| | openshift-authentication-operator | cluster-authentication-operator-routercertsdomainvalidationcontroller | authentication-operator | SecretCreated | Created Secret/v4-0-config-system-router-certs -n openshift-authentication because it was missing |
| | openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObservedConfigChanged | Writing updated section ("oauthServer") of observed config: "\u00a0\u00a0map[string]any{\n\u00a0\u00a0\t\"corsAllowedOrigins\": []any{string(`//127\\.0\\.0\\.1(:|$)`), string(\"//localhost(:|$)\")},\n\u00a0\u00a0\t\"oauthConfig\": map[string]any{\"assetPublicURL\": string(\"\"), \"loginURL\": string(\"https://api.sno.openstack.lab:6443\"), \"templates\": map[string]any{\"error\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...), \"login\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...), \"providerSelection\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...)}, \"tokenConfig\": map[string]any{\"accessTokenMaxAgeSeconds\": float64(86400), \"authorizeTokenMaxAgeSeconds\": float64(300)}},\n\u00a0\u00a0\t\"serverArguments\": map[string]any{\"audit-log-format\": []any{string(\"json\")}, \"audit-log-maxbackup\": []any{string(\"10\")}, \"audit-log-maxsize\": []any{string(\"100\")}, \"audit-log-path\": []any{string(\"/var/log/oauth-server/audit.log\")}, ...},\n\u00a0\u00a0\t\"servingInfo\": map[string]any{\n\u00a0\u00a0\t\t\"cipherSuites\": []any{string(\"TLS_AES_128_GCM_SHA256\"), string(\"TLS_AES_256_GCM_SHA384\"), string(\"TLS_CHACHA20_POLY1305_SHA256\"), string(\"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256\"), ...},\n\u00a0\u00a0\t\t\"minTLSVersion\": string(\"VersionTLS12\"),\n+\u00a0\t\t\"namedCertificates\": []any{\n+\u00a0\t\t\tmap[string]any{\n+\u00a0\t\t\t\t\"certFile\": string(\"/var/config/system/secrets/v4-0-config-system-router-certs/apps.\"...),\n+\u00a0\t\t\t\t\"keyFile\": string(\"/var/config/system/secrets/v4-0-config-system-router-certs/apps.\"...),\n+\u00a0\t\t\t\t\"names\": []any{string(\"*.apps.sno.openstack.lab\")},\n+\u00a0\t\t\t},\n+\u00a0\t\t},\n\u00a0\u00a0\t},\n\u00a0\u00a0\t\"volumesToMount\": map[string]any{\"identityProviders\": string(\"{}\")},\n\u00a0\u00a0}\n" |
| | openshift-marketplace | multus | redhat-marketplace-sft6r | AddedInterface | Add eth0 [10.128.0.50/23] from ovn-kubernetes |
| | openshift-route-controller-manager | kubelet | route-controller-manager-6965bd7478-x8mdg | Started | Started container route-controller-manager |
| | openshift-marketplace | kubelet | community-operators-662mc | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" already present on machine |
| | openshift-marketplace | multus | community-operators-662mc | AddedInterface | Add eth0 [10.128.0.49/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | community-operators-662mc | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | redhat-marketplace-sft6r | Started | Started container extract-utilities |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-node namespace |
| | openshift-dns | kubelet | dns-default-wxhtx | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8ea13b0cbfe9be0d3d7ea80d50e512af6a453921a553c7c79b566530142b611b" in 4.654s (4.654s including waiting). Image size: 479006001 bytes. |
| | openshift-controller-manager | default-scheduler | controller-manager-b9c8fdfbc-rh9v2 | Scheduled | Successfully assigned openshift-controller-manager/controller-manager-b9c8fdfbc-rh9v2 to master-0 |
| | openshift-oauth-apiserver | kubelet | apiserver-865765995-c58rq | Started | Started container fix-audit-permissions |
| | openshift-oauth-apiserver | kubelet | apiserver-865765995-c58rq | Created | Created container: fix-audit-permissions |
| | openshift-oauth-apiserver | kubelet | apiserver-865765995-c58rq | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cf24751d6b6d66fcfc26aa8e0f94a4248a3edab6dbfe3fe9651a90b6b4d92192" in 6.088s (6.088s including waiting). Image size: 500175306 bytes. |
| | openshift-route-controller-manager | route-controller-manager | openshift-route-controllers | LeaderElection | route-controller-manager-6965bd7478-x8mdg_67984cca-b8c9-4433-936b-028a9fc58064 became leader |
| | openshift-marketplace | kubelet | redhat-marketplace-sft6r | Created | Created container: extract-utilities |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/service-account-private-key-2 -n openshift-kube-controller-manager because it was missing |
| | openshift-marketplace | kubelet | redhat-marketplace-sft6r | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" already present on machine |
| (x2) | openshift-route-controller-manager | default-scheduler | route-controller-manager-6978b88779-vp5tv | FailedScheduling | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. |
| | openshift-route-controller-manager | kubelet | route-controller-manager-6965bd7478-x8mdg | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0871b6c16b38a2eda5d1c89fd75079aff0775224307e940557e6fda6ba229f38" in 6.097s (6.097s including waiting). Image size: 481921522 bytes. |
| | openshift-route-controller-manager | kubelet | route-controller-manager-6965bd7478-x8mdg | Created | Created container: route-controller-manager |
| | openshift-marketplace | kubelet | community-operators-662mc | Created | Created container: extract-utilities |
| | openshift-controller-manager | openshift-controller-manager | openshift-master-controllers | LeaderElection | controller-manager-b9c8fdfbc-rh9v2 became leader |
| | openshift-dns | kubelet | dns-default-wxhtx | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-route-controller-manager | kubelet | route-controller-manager-6965bd7478-x8mdg | Killing | Stopping container route-controller-manager |
| (x2) | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-targetconfigcontroller | openshift-kube-scheduler-operator | ConfigMapUpdated | Updated ConfigMap/serviceaccount-ca -n openshift-kube-scheduler: cause by changes in data.ca-bundle.crt |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | StartingNewRevision | new revision 4 triggered by "required configmap/serviceaccount-ca has changed" |
| | openshift-route-controller-manager | kubelet | route-controller-manager-6965bd7478-x8mdg | ProbeError | Readiness probe error: Get "https://10.128.0.40:8443/healthz": dial tcp 10.128.0.40:8443: connect: connection refused body: |
| | openshift-route-controller-manager | kubelet | route-controller-manager-6965bd7478-x8mdg | Unhealthy | Readiness probe failed: Get "https://10.128.0.40:8443/healthz": dial tcp 10.128.0.40:8443: connect: connection refused |
| | openshift-oauth-apiserver | kubelet | apiserver-865765995-c58rq | Created | Created container: oauth-apiserver |
| | openshift-marketplace | kubelet | redhat-marketplace-sft6r | Pulling | Pulling image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" |
| | openshift-oauth-apiserver | kubelet | apiserver-865765995-c58rq | Started | Started container oauth-apiserver |
| | openshift-marketplace | kubelet | community-operators-662mc | Pulling | Pulling image "registry.redhat.io/redhat/community-operator-index:v4.18" |
| | openshift-dns | kubelet | dns-default-wxhtx | Started | Started container kube-rbac-proxy |
| | openshift-dns | kubelet | dns-default-wxhtx | Created | Created container: kube-rbac-proxy |
| | openshift-oauth-apiserver | kubelet | apiserver-865765995-c58rq | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cf24751d6b6d66fcfc26aa8e0f94a4248a3edab6dbfe3fe9651a90b6b4d92192" already present on machine |
| | openshift-dns | kubelet | dns-default-wxhtx | Started | Started container dns |
| | openshift-dns | kubelet | dns-default-wxhtx | Created | Created container: dns |
| | openshift-controller-manager | multus | controller-manager-b9c8fdfbc-rh9v2 | AddedInterface | Add eth0 [10.128.0.51/23] from ovn-kubernetes |
| | openshift-authentication-operator | cluster-authentication-operator-trust-distribution-trustdistributioncontroller | authentication-operator | ConfigMapCreated | Created ConfigMap/oauth-serving-cert -n openshift-config-managed because it was missing |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Progressing changed from True to False ("All is well"),Available changed from False to True ("All is well") |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/serving-cert-2 -n openshift-kube-controller-manager because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | | Created <unknown>/v1.image.openshift.io because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | | Created <unknown>/v1.build.openshift.io because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/localhost-recovery-client-token-2 -n openshift-kube-controller-manager because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | | Created <unknown>/v1.apps.openshift.io because it was missing |
| | openshift-authentication-operator | oauth-apiserver-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | authentication-operator | | Created <unknown>/v1.oauth.openshift.io because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | | Created <unknown>/v1.authorization.openshift.io because it was missing |
| | openshift-authentication-operator | oauth-apiserver-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | authentication-operator | | Created <unknown>/v1.user.openshift.io because it was missing |
| | openshift-authentication-operator | oauth-apiserver-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | authentication-operator | OpenShiftAPICheckFailed | "oauth.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | | Created <unknown>/v1.quota.openshift.io because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/kube-scheduler-pod-4 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/config-4 -n openshift-kube-scheduler because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: status.versions changed from [{"operator" "4.18.32"}] to [{"operator" "4.18.32"} {"openshift-apiserver" "4.18.32"}] |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | | Created <unknown>/v1.route.openshift.io because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorVersionChanged | clusteroperator/openshift-apiserver version "openshift-apiserver" changed from "" to "4.18.32" |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | | Created <unknown>/v1.project.openshift.io because it was missing |
| | openshift-authentication-operator | oauth-apiserver-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | authentication-operator | OpenShiftAPICheckFailed | "user.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Progressing changed from True to False ("All is well"),Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady" to "APIServicesAvailable: PreconditionNotReady" |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "authorization.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "route.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "project.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "apps.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "quota.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | RevisionTriggered | new revision 2 triggered by "required configmap/config has changed,required configmap/cluster-policy-controller-config has changed" |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "security.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/serviceaccount-ca-4 -n openshift-kube-scheduler because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | | Created <unknown>/v1.template.openshift.io because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | | Created <unknown>/v1.security.openshift.io because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "image.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | StartingNewRevision | new revision 3 triggered by "required configmap/kube-controller-manager-pod has changed" |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "template.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "build.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request |
| | openshift-route-controller-manager | default-scheduler | route-controller-manager-6978b88779-vp5tv | Scheduled | Successfully assigned openshift-route-controller-manager/route-controller-manager-6978b88779-vp5tv to master-0 |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/scheduler-kubeconfig-4 -n openshift-kube-scheduler because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServicesAvailable: PreconditionNotReady" to "APIServicesAvailable: \"apps.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"authorization.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"build.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"image.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"project.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"quota.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"route.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"security.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"template.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request" |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServicesAvailable: \"apps.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"authorization.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"build.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"image.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"project.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"quota.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"route.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"security.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"template.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request" to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.128.0.35:8443/apis/apps.openshift.io/v1: bad status from https://10.128.0.35:8443/apis/apps.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from https://10.128.0.35:8443/apis/authorization.openshift.io/v1: bad status from https://10.128.0.35:8443/apis/authorization.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.128.0.35:8443/apis/build.openshift.io/v1: bad status from https://10.128.0.35:8443/apis/build.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.128.0.35:8443/apis/image.openshift.io/v1: bad status from https://10.128.0.35:8443/apis/image.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.128.0.35:8443/apis/project.openshift.io/v1: bad status from https://10.128.0.35:8443/apis/project.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.128.0.35:8443/apis/quota.openshift.io/v1: bad status from https://10.128.0.35:8443/apis/quota.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.128.0.35:8443/apis/route.openshift.io/v1: bad status from https://10.128.0.35:8443/apis/route.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.128.0.35:8443/apis/security.openshift.io/v1: bad status from https://10.128.0.35:8443/apis/security.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.128.0.35:8443/apis/template.openshift.io/v1: bad status from https://10.128.0.35:8443/apis/template.openshift.io/v1: 401" |
| | openshift-route-controller-manager | multus | route-controller-manager-6978b88779-vp5tv | AddedInterface | Add eth0 [10.128.0.52/23] from ovn-kubernetes |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-4 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/kube-controller-manager-pod-3 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | SecretCreated | Created Secret/serving-cert-4 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-controller-manager | kubelet | installer-1-master-0 | Killing | Stopping container installer |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 1" to "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 2",Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 1" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 2" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/config-3 -n openshift-kube-controller-manager because it was missing |
| (x23) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-satokensignercontroller | kube-controller-manager-operator | SATokenSignerControllerStuck | unexpected addresses: 192.168.32.10 |
| | openshift-cluster-version | kubelet | cluster-version-operator-76959b6567-v49tq | Killing | Stopping container cluster-version-operator |
| | openshift-cluster-version | replicaset-controller | cluster-version-operator-76959b6567 | SuccessfulDelete | Deleted pod: cluster-version-operator-76959b6567-v49tq |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | SecretCreated | Created Secret/localhost-recovery-client-token-4 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | RevisionTriggered | new revision 4 triggered by "required configmap/serviceaccount-ca has changed" |
| | openshift-route-controller-manager | kubelet | route-controller-manager-6978b88779-vp5tv | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0871b6c16b38a2eda5d1c89fd75079aff0775224307e940557e6fda6ba229f38" already present on machine |
| | openshift-cluster-version | deployment-controller | cluster-version-operator | ScalingReplicaSet | Scaled down replica set cluster-version-operator-76959b6567 to 0 from 1 |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/cluster-policy-controller-config-3 -n openshift-kube-controller-manager because it was missing |
| | openshift-etcd | kubelet | etcd-master-0-master-0 | Killing | Stopping container etcdctl |
| | openshift-etcd | static-pod-installer | installer-1-master-0 | StaticPodInstallerCompleted | Successfully installed revision 1 |
| | openshift-marketplace | default-scheduler | certified-operators-2lg56 | Scheduled | Successfully assigned openshift-marketplace/certified-operators-2lg56 to master-0 |
| | openshift-marketplace | default-scheduler | community-operators-t8vtc | Scheduled | Successfully assigned openshift-marketplace/community-operators-t8vtc to master-0 |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | PodCreated | Created Pod/installer-2-master-0 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/controller-manager-kubeconfig-3 -n openshift-kube-controller-manager because it was missing |
| | openshift-route-controller-manager | kubelet | route-controller-manager-6978b88779-vp5tv | Created | Created container: route-controller-manager |
| | kube-system | kubelet | bootstrap-kube-scheduler-master-0 | Created | Created container: kube-scheduler |
| | openshift-marketplace | kubelet | certified-operators-xqt6f | Created | Created container: extract-content |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container setup |
| | openshift-marketplace | kubelet | community-operators-662mc | Started | Started container extract-content |
| | openshift-marketplace | kubelet | redhat-operators-7x72v | Created | Created container: extract-content |
| | openshift-marketplace | kubelet | redhat-marketplace-sft6r | Created | Created container: extract-content |
| | kube-system | kubelet | bootstrap-kube-scheduler-master-0 | Started | Started container kube-scheduler |
| | openshift-marketplace | kubelet | redhat-marketplace-sft6r | Started | Started container extract-content |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: setup |
| | openshift-marketplace | kubelet | community-operators-662mc | Created | Created container: extract-content |
| | openshift-route-controller-manager | kubelet | route-controller-manager-6978b88779-vp5tv | Started | Started container route-controller-manager |
| | openshift-marketplace | kubelet | certified-operators-xqt6f | Started | Started container extract-content |
| | openshift-marketplace | kubelet | redhat-operators-7x72v | Started | Started container extract-content |
| | openshift-marketplace | kubelet | redhat-operators-7x72v | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aca57c8abaa83b2d1dce95fa7fe5b9416be70e100957ce48f212e2ec404387bc" |
| | openshift-marketplace | kubelet | redhat-marketplace-sft6r | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aca57c8abaa83b2d1dce95fa7fe5b9416be70e100957ce48f212e2ec404387bc" |
| | openshift-marketplace | kubelet | redhat-marketplace-sft6r | Created | Created container: registry-server |
| | openshift-marketplace | kubelet | redhat-operators-7x72v | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aca57c8abaa83b2d1dce95fa7fe5b9416be70e100957ce48f212e2ec404387bc" in 9.151s (9.151s including waiting). Image size: 913084961 bytes. |
| | openshift-marketplace | kubelet | redhat-marketplace-sft6r | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aca57c8abaa83b2d1dce95fa7fe5b9416be70e100957ce48f212e2ec404387bc" in 9.179s (9.179s including waiting). Image size: 913084961 bytes. |
| | openshift-marketplace | kubelet | redhat-marketplace-sft6r | Started | Started container registry-server |
| | openshift-marketplace | kubelet | redhat-operators-7x72v | Created | Created container: registry-server |
| | openshift-marketplace | kubelet | redhat-operators-7x72v | Started | Started container registry-server |
openshift-etcd |
kubelet |
etcd-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8bed6766df40c0c172611f3e4555cd20db639eb505b2345abed6d5babdcbb5e3" already present on machine | |
openshift-etcd |
kubelet |
etcd-master-0 |
Created |
Created container: etcd-ensure-env-vars | |
openshift-etcd |
kubelet |
etcd-master-0 |
Started |
Started container etcd-ensure-env-vars | |
openshift-marketplace |
kubelet |
redhat-operators-7x72v |
Unhealthy |
Startup probe failed: timeout: failed to connect service ":50051" within 1s | |
| (x5) | openshift-route-controller-manager | kubelet | route-controller-manager-6978b88779-vp5tv | ProbeError | Readiness probe error: Get "https://10.128.0.52:8443/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) body: |
| (x5) | openshift-route-controller-manager | kubelet | route-controller-manager-6978b88779-vp5tv | Unhealthy | Readiness probe failed: Get "https://10.128.0.52:8443/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcd-resources-copy |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcd-resources-copy |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8bed6766df40c0c172611f3e4555cd20db639eb505b2345abed6d5babdcbb5e3" already present on machine |
| (x3) | openshift-route-controller-manager | kubelet | route-controller-manager-6978b88779-vp5tv | ProbeError | Liveness probe error: Get "https://10.128.0.52:8443/healthz": dial tcp 10.128.0.52:8443: connect: connection refused body: |
| (x3) | openshift-route-controller-manager | kubelet | route-controller-manager-6978b88779-vp5tv | Unhealthy | Liveness probe failed: Get "https://10.128.0.52:8443/healthz": dial tcp 10.128.0.52:8443: connect: connection refused |
| (x3) | openshift-route-controller-manager | kubelet | route-controller-manager-6978b88779-vp5tv | Unhealthy | Readiness probe failed: Get "https://10.128.0.52:8443/healthz": dial tcp 10.128.0.52:8443: connect: connection refused |
| | openshift-kube-controller-manager | kubelet | installer-2-master-0 | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-2-master-0_openshift-kube-controller-manager_d5655115-c223-42ed-a93d-9d609e55c901_0(9033bd2a10a5aa2000f2e305ee22c191997b00eae1490177ec947aa0a1252cff): error adding pod openshift-kube-controller-manager_installer-2-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"9033bd2a10a5aa2000f2e305ee22c191997b00eae1490177ec947aa0a1252cff" Netns:"/var/run/netns/2766612a-a335-4bdc-94a4-bd48079be634" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-2-master-0;K8S_POD_INFRA_CONTAINER_ID=9033bd2a10a5aa2000f2e305ee22c191997b00eae1490177ec947aa0a1252cff;K8S_POD_UID=d5655115-c223-42ed-a93d-9d609e55c901" Path:"" ERRORED: error configuring pod [openshift-kube-controller-manager/installer-2-master-0] networking: Multus: [openshift-kube-controller-manager/installer-2-master-0/d5655115-c223-42ed-a93d-9d609e55c901]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-2-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-2-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-2-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| (x4) | openshift-route-controller-manager | kubelet | route-controller-manager-6978b88779-vp5tv | ProbeError | Readiness probe error: Get "https://10.128.0.52:8443/healthz": dial tcp 10.128.0.52:8443: connect: connection refused body: |
| | openshift-network-node-identity | kubelet | network-node-identity-xwftw | Started | Started container approver |
| | openshift-network-node-identity | kubelet | network-node-identity-xwftw | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec" already present on machine |
| | openshift-network-node-identity | kubelet | network-node-identity-xwftw | Created | Created container: approver |
| (x3) | openshift-marketplace | kubelet | marketplace-operator-6cc5b65c6b-wqxmh | ProbeError | Liveness probe error: Get "http://10.128.0.14:8080/healthz": dial tcp 10.128.0.14:8080: connect: connection refused body: |
| (x3) | openshift-marketplace | kubelet | marketplace-operator-6cc5b65c6b-wqxmh | Unhealthy | Liveness probe failed: Get "http://10.128.0.14:8080/healthz": dial tcp 10.128.0.14:8080: connect: connection refused |
| (x5) | openshift-marketplace | kubelet | marketplace-operator-6cc5b65c6b-wqxmh | Unhealthy | Readiness probe failed: Get "http://10.128.0.14:8080/healthz": dial tcp 10.128.0.14:8080: connect: connection refused |
| (x5) | openshift-marketplace | kubelet | marketplace-operator-6cc5b65c6b-wqxmh | ProbeError | Readiness probe error: Get "http://10.128.0.14:8080/healthz": dial tcp 10.128.0.14:8080: connect: connection refused body: |
| | openshift-kube-controller-manager | kubelet | installer-2-master-0 | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-2-master-0_openshift-kube-controller-manager_d5655115-c223-42ed-a93d-9d609e55c901_0(d0a1d11a0a2d2c2561d3d10071017aa5fc4d3755b5c0966e48c3e368098ee733): error adding pod openshift-kube-controller-manager_installer-2-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"d0a1d11a0a2d2c2561d3d10071017aa5fc4d3755b5c0966e48c3e368098ee733" Netns:"/var/run/netns/3d0983f6-5926-494a-b7e7-8e345122a0c6" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-2-master-0;K8S_POD_INFRA_CONTAINER_ID=d0a1d11a0a2d2c2561d3d10071017aa5fc4d3755b5c0966e48c3e368098ee733;K8S_POD_UID=d5655115-c223-42ed-a93d-9d609e55c901" Path:"" ERRORED: error configuring pod [openshift-kube-controller-manager/installer-2-master-0] networking: Multus: [openshift-kube-controller-manager/installer-2-master-0/d5655115-c223-42ed-a93d-9d609e55c901]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-2-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-2-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-2-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| | openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-bb7ffbb8d-rj245 | Started | Started container ovnkube-cluster-manager |
| | openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-bb7ffbb8d-rj245 | Created | Created container: ovnkube-cluster-manager |
| | openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-bb7ffbb8d-rj245 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec" already present on machine |
| (x6) | openshift-operator-controller | kubelet | operator-controller-controller-manager-85c9b89969-4n2ls | ProbeError | Readiness probe error: Get "http://10.128.0.39:8081/readyz": dial tcp 10.128.0.39:8081: connect: connection refused body: |
| (x3) | openshift-operator-controller | kubelet | operator-controller-controller-manager-85c9b89969-4n2ls | Unhealthy | Liveness probe failed: Get "http://10.128.0.39:8081/healthz": dial tcp 10.128.0.39:8081: connect: connection refused |
| (x3) | openshift-operator-controller | kubelet | operator-controller-controller-manager-85c9b89969-4n2ls | ProbeError | Liveness probe error: Get "http://10.128.0.39:8081/healthz": dial tcp 10.128.0.39:8081: connect: connection refused body: |
| (x6) | openshift-operator-controller | kubelet | operator-controller-controller-manager-85c9b89969-4n2ls | Unhealthy | Readiness probe failed: Get "http://10.128.0.39:8081/readyz": dial tcp 10.128.0.39:8081: connect: connection refused |
| (x3) | openshift-controller-manager | kubelet | controller-manager-b9c8fdfbc-rh9v2 | ProbeError | Liveness probe error: Get "https://10.128.0.51:8443/healthz": dial tcp 10.128.0.51:8443: connect: connection refused body: |
| (x3) | openshift-controller-manager | kubelet | controller-manager-b9c8fdfbc-rh9v2 | Unhealthy | Liveness probe failed: Get "https://10.128.0.51:8443/healthz": dial tcp 10.128.0.51:8443: connect: connection refused |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8bed6766df40c0c172611f3e4555cd20db639eb505b2345abed6d5babdcbb5e3" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcdctl |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcdctl |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8bed6766df40c0c172611f3e4555cd20db639eb505b2345abed6d5babdcbb5e3" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcd |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcd-metrics |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcd-rev |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8bed6766df40c0c172611f3e4555cd20db639eb505b2345abed6d5babdcbb5e3" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcd |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6dd9324942b3d09b4b9a768f36b47be4e555d947910ee3d115fc5448c95f7399" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcd-readyz |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcd-metrics |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcd-readyz |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6dd9324942b3d09b4b9a768f36b47be4e555d947910ee3d115fc5448c95f7399" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcd-rev |
| (x2) | openshift-authentication-operator | kubelet | authentication-operator-755d954778-jrdqm | Unhealthy | Liveness probe failed: Get "https://10.128.0.24:8443/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) |
| (x2) | openshift-authentication-operator | kubelet | authentication-operator-755d954778-jrdqm | ProbeError | Liveness probe error: Get "https://10.128.0.24:8443/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) body: |
| (x3) | openshift-catalogd | kubelet | catalogd-controller-manager-67bc7c997f-jdfsm | Unhealthy | Liveness probe failed: Get "http://10.128.0.36:8081/healthz": dial tcp 10.128.0.36:8081: connect: connection refused |
| (x6) | openshift-catalogd | kubelet | catalogd-controller-manager-67bc7c997f-jdfsm | Unhealthy | Readiness probe failed: Get "http://10.128.0.36:8081/readyz": dial tcp 10.128.0.36:8081: connect: connection refused |
| (x3) | openshift-catalogd | kubelet | catalogd-controller-manager-67bc7c997f-jdfsm | ProbeError | Liveness probe error: Get "http://10.128.0.36:8081/healthz": dial tcp 10.128.0.36:8081: connect: connection refused body: |
| (x7) | openshift-catalogd | kubelet | catalogd-controller-manager-67bc7c997f-jdfsm | ProbeError | Readiness probe error: Get "http://10.128.0.36:8081/readyz": dial tcp 10.128.0.36:8081: connect: connection refused body: |
| (x5) | openshift-controller-manager | kubelet | controller-manager-b9c8fdfbc-rh9v2 | Unhealthy | Readiness probe failed: Get "https://10.128.0.51:8443/healthz": dial tcp 10.128.0.51:8443: connect: connection refused |
| (x5) | openshift-controller-manager | kubelet | controller-manager-b9c8fdfbc-rh9v2 | ProbeError | Readiness probe error: Get "https://10.128.0.51:8443/healthz": dial tcp 10.128.0.51:8443: connect: connection refused body: |
| (x2) | openshift-controller-manager | kubelet | controller-manager-b9c8fdfbc-rh9v2 | Started | Started container controller-manager |
| (x2) | openshift-controller-manager | kubelet | controller-manager-b9c8fdfbc-rh9v2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f122c11c2f6a10ca150b136f7291d2e135b3a182d67809aa49727da289787cee" already present on machine |
| (x2) | openshift-controller-manager | kubelet | controller-manager-b9c8fdfbc-rh9v2 | Created | Created container: controller-manager |
| (x4) | kube-system | kubelet | bootstrap-kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d" already present on machine |
| (x3) | openshift-authentication-operator | kubelet | authentication-operator-755d954778-jrdqm | Created | Created container: authentication-operator |
| | openshift-kube-controller-manager | kubelet | installer-2-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e8ae0b5bab647ff989f276cead5f360bcb88c813f181d75dc3106eb5dbde0b39" already present on machine |
| | openshift-kube-controller-manager | kubelet | installer-2-master-0 | Created | Created container: installer |
| (x4) | kube-system | kubelet | bootstrap-kube-controller-manager-master-0 | Created | Created container: kube-controller-manager |
| (x2) | openshift-authentication-operator | kubelet | authentication-operator-755d954778-jrdqm | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:047699c5a63593f45e9dd6f9fac0fa636ffc012331ee592891bfb08001bdd963" already present on machine |
| | openshift-kube-controller-manager | kubelet | installer-2-master-0 | Started | Started container installer |
| (x3) | openshift-authentication-operator | kubelet | authentication-operator-755d954778-jrdqm | Started | Started container authentication-operator |
| (x4) | kube-system | kubelet | bootstrap-kube-controller-manager-master-0 | Started | Started container kube-controller-manager |
| | kube-system | kubelet | bootstrap-kube-controller-manager-master-0 | BackOff | Back-off restarting failed container kube-controller-manager in pod bootstrap-kube-controller-manager-master-0_kube-system(80420f2e7c3cdda71f7d0d6ccbe6f9f3) |
| (x3) | kube-system | kubelet | bootstrap-kube-controller-manager-master-0 | Killing | Container kube-controller-manager failed startup probe, will be restarted |
| (x9) | kube-system | kubelet | bootstrap-kube-controller-manager-master-0 | Unhealthy | Startup probe failed: Get "https://192.168.32.10:10257/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) |
| (x2) | openshift-etcd-operator | kubelet | etcd-operator-67bf55ccdd-pjm6n | Unhealthy | Liveness probe failed: Get "https://10.128.0.12:8443/healthz": dial tcp 10.128.0.12:8443: connect: connection refused |
| (x2) | openshift-etcd-operator | kubelet | etcd-operator-67bf55ccdd-pjm6n | ProbeError | Liveness probe error: Get "https://10.128.0.12:8443/healthz": dial tcp 10.128.0.12:8443: connect: connection refused body: |
| (x6) | openshift-authentication-operator | kubelet | authentication-operator-755d954778-jrdqm | ProbeError | Liveness probe error: Get "https://10.128.0.24:8443/healthz": dial tcp 10.128.0.24:8443: connect: connection refused body: |
| (x6) | openshift-authentication-operator | kubelet | authentication-operator-755d954778-jrdqm | Unhealthy | Liveness probe failed: Get "https://10.128.0.24:8443/healthz": dial tcp 10.128.0.24:8443: connect: connection refused |
| (x7) | openshift-marketplace | kubelet | certified-operators-2lg56 | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-7gwpz" : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded |
| (x7) | openshift-marketplace | kubelet | community-operators-t8vtc | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-zr2dv" : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded |
| (x2) | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-operator-7b87b97578-9fpgj | Created | Created container: csi-snapshot-controller-operator |
| (x2) | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-operator-7b87b97578-9fpgj | Started | Started container csi-snapshot-controller-operator |
| (x2) | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-operator-7b87b97578-9fpgj | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:13d06502c0f0a3c73f69bf8d0743718f7cfc46e71f4a12916517ad7e9bff17e1" already present on machine |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Created | Created container: cluster-policy-controller |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Started | Started container cluster-policy-controller |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d" already present on machine |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Created | Created container: kube-controller-manager |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Started | Started container kube-controller-manager |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5d54bd262ca625a326b01ea2bfd33db10a402c05590e6b710b0959712e1bf30b" already present on machine |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e8ae0b5bab647ff989f276cead5f360bcb88c813f181d75dc3106eb5dbde0b39" already present on machine |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Started | Started container kube-controller-manager-recovery-controller |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Created | Created container: kube-controller-manager-recovery-controller |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e8ae0b5bab647ff989f276cead5f360bcb88c813f181d75dc3106eb5dbde0b39" already present on machine |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Started | Started container kube-controller-manager-cert-syncer |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Created | Created container: kube-controller-manager-cert-syncer |
| (x2) | openshift-service-ca | kubelet | service-ca-676cd8b9b5-bfm5s | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5abe992def861ec075251ae17bbd66fa23bd05bd3701953c0fdcf68a8d161f1e" already present on machine |
| (x2) | openshift-service-ca | kubelet | service-ca-676cd8b9b5-bfm5s | Created | Created container: service-ca-controller |
| (x2) | openshift-service-ca | kubelet | service-ca-676cd8b9b5-bfm5s | Started | Started container service-ca-controller |
| (x3) | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-74b6595c6d-q4766 | BackOff | Back-off restarting failed container snapshot-controller in pod csi-snapshot-controller-74b6595c6d-q4766_openshift-cluster-storage-operator(129dba1e-73df-4ea4-96c0-3eba78d568ba) |
| | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-55b69c6c48-mzk89 | BackOff | Back-off restarting failed container cluster-olm-operator in pod cluster-olm-operator-55b69c6c48-mzk89_openshift-cluster-olm-operator(6c734c89-515e-4ff0-82d1-831ddaf0b99e) |
| | openshift-kube-scheduler-operator | kubelet | openshift-kube-scheduler-operator-7485d55966-wcpf8 | BackOff | Back-off restarting failed container kube-scheduler-operator-container in pod openshift-kube-scheduler-operator-7485d55966-wcpf8_openshift-kube-scheduler-operator(2b167b7b-2280-4c82-ac78-71c57aebe503) |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | ProbeError | Startup probe error: Get "https://localhost:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) body: |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Unhealthy | Startup probe failed: Get "https://localhost:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) |
| | openshift-apiserver-operator | kubelet | openshift-apiserver-operator-6d4655d9cf-5f5g9 | BackOff | Back-off restarting failed container openshift-apiserver-operator in pod openshift-apiserver-operator-6d4655d9cf-5f5g9_openshift-apiserver-operator(af61bda0-c7b4-489d-a671-eaa5299942fe) |
| | openshift-network-operator | kubelet | network-operator-6fcf4c966-l24cg | BackOff | Back-off restarting failed container network-operator in pod network-operator-6fcf4c966-l24cg_openshift-network-operator(4fd2c79d-1e10-4f09-8a33-c66598abc99a) |
| (x2) | openshift-config-operator | kubelet | openshift-config-operator-7c6bdb986f-fcnqs | BackOff | Back-off restarting failed container openshift-config-operator in pod openshift-config-operator-7c6bdb986f-fcnqs_openshift-config-operator(61d90bf3-02df-48c8-b2ec-09a1653b0800) |
| | openshift-marketplace | kubelet | redhat-marketplace-sft6r | Killing | Stopping container registry-server |
| | openshift-marketplace | kubelet | redhat-operators-7x72v | Killing | Stopping container registry-server |
| | openshift-image-registry | kubelet | cluster-image-registry-operator-96c8c64b8-dtwmd | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc03f91dbf08df9907c0ebad30c54a7fa92285b19ec4e440ed762b197378a861" already present on machine |
| (x2) | openshift-operator-lifecycle-manager | kubelet | package-server-manager-5c696dbdcd-t7n5b | Started | Started container package-server-manager |
| | openshift-ovn-kubernetes | ovnk-controlplane | ovn-kubernetes-master | LeaderElection | ovnkube-control-plane-bb7ffbb8d-rj245 became leader |
| | openshift-kube-controller-manager | cluster-policy-controller | cluster-policy-controller-lock | LeaderElection | master-0_6805b4b3-e7e1-48cb-a7d6-a4ff3715a212 became leader |
| | openshift-cluster-node-tuning-operator | cluster-node-tuning-operator-ff6c9b66-k8xp8_20655b96-a647-4267-aff9-e14d454e3a33 | node-tuning-operator-lock | LeaderElection | cluster-node-tuning-operator-ff6c9b66-k8xp8_20655b96-a647-4267-aff9-e14d454e3a33 became leader |
| | openshift-cluster-node-tuning-operator | performance-profile-controller | cluster-node-tuning-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-ff6c9b66-k8xp8 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:56dffbd86bfae06921432678caf184b335bf2fc6ac7ee128f48aee396d57ea55" already present on machine |
| (x2) | openshift-operator-lifecycle-manager | kubelet | package-server-manager-5c696dbdcd-t7n5b | Created | Created container: package-server-manager |
| | openshift-kube-controller-manager | cluster-policy-controller | kube-controller-manager-master-0 | ControlPlaneTopology | unable to get control plane topology, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope |
openshift-route-controller-manager |
route-controller-manager |
openshift-route-controllers |
LeaderElection |
route-controller-manager-6978b88779-vp5tv_6de57360-0f8f-4daa-a944-ec19dfe53d10 became leader | |
| (x2) | openshift-cluster-node-tuning-operator |
kubelet |
cluster-node-tuning-operator-ff6c9b66-k8xp8 |
Created |
Created container: cluster-node-tuning-operator |
openshift-operator-lifecycle-manager |
kubelet |
package-server-manager-5c696dbdcd-t7n5b |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" already present on machine | |
| (x2) | openshift-image-registry |
kubelet |
cluster-image-registry-operator-96c8c64b8-dtwmd |
Started |
Started container cluster-image-registry-operator |
openshift-image-registry |
image-registry-operator |
openshift-master-controllers |
LeaderElection |
cluster-image-registry-operator-96c8c64b8-dtwmd_ada902ed-6507-4ec9-8f7c-2273f1f47617 became leader | |
| (x2) | openshift-cluster-node-tuning-operator |
kubelet |
cluster-node-tuning-operator-ff6c9b66-k8xp8 |
Started |
Started container cluster-node-tuning-operator |
| (x2) | openshift-image-registry |
kubelet |
cluster-image-registry-operator-96c8c64b8-dtwmd |
Created |
Created container: cluster-image-registry-operator |
openshift-image-registry |
image-registry-operator |
cluster-image-registry-operator |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} | |
openshift-operator-lifecycle-manager |
package-server-manager-5c696dbdcd-t7n5b_89ef9d5d-9c2a-45bd-8192-30be90f5459e |
packageserver-controller-lock |
LeaderElection |
package-server-manager-5c696dbdcd-t7n5b_89ef9d5d-9c2a-45bd-8192-30be90f5459e became leader | |
openshift-controller-manager |
openshift-controller-manager |
openshift-master-controllers |
LeaderElection |
controller-manager-b9c8fdfbc-rh9v2 became leader | |
openshift-marketplace |
kubelet |
certified-operators-2lg56 |
Created |
Created container: extract-utilities | |
openshift-marketplace |
kubelet |
certified-operators-2lg56 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" already present on machine | |
openshift-marketplace |
kubelet |
community-operators-t8vtc |
Pulling |
Pulling image "registry.redhat.io/redhat/community-operator-index:v4.18" | |
openshift-marketplace |
kubelet |
community-operators-t8vtc |
Started |
Started container extract-utilities | |
openshift-marketplace |
kubelet |
community-operators-t8vtc |
Created |
Created container: extract-utilities | |
openshift-marketplace |
multus |
certified-operators-2lg56 |
AddedInterface |
Add eth0 [10.128.0.55/23] from ovn-kubernetes | |
| (x2) | openshift-cluster-olm-operator |
kubelet |
cluster-olm-operator-55b69c6c48-mzk89 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:333e6572029953b4c4676076f0991ee6e5c7d28cbe2887c71b1682f19831d8a1" already present on machine |
openshift-marketplace |
kubelet |
community-operators-t8vtc |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" already present on machine | |
openshift-marketplace |
multus |
community-operators-t8vtc |
AddedInterface |
Add eth0 [10.128.0.54/23] from ovn-kubernetes | |
openshift-marketplace |
kubelet |
certified-operators-2lg56 |
Started |
Started container extract-utilities | |
| (x3) | openshift-cluster-olm-operator |
kubelet |
cluster-olm-operator-55b69c6c48-mzk89 |
Started |
Started container cluster-olm-operator |
| (x3) | openshift-cluster-olm-operator |
kubelet |
cluster-olm-operator-55b69c6c48-mzk89 |
Created |
Created container: cluster-olm-operator |
openshift-marketplace |
kubelet |
certified-operators-2lg56 |
Pulling |
Pulling image "registry.redhat.io/redhat/certified-operator-index:v4.18" | |
openshift-marketplace |
kubelet |
certified-operators-2lg56 |
Pulled |
Successfully pulled image "registry.redhat.io/redhat/certified-operator-index:v4.18" in 588ms (588ms including waiting). Image size: 1234637517 bytes. | |
openshift-marketplace |
kubelet |
community-operators-t8vtc |
Started |
Started container extract-content | |
openshift-marketplace |
kubelet |
certified-operators-2lg56 |
Created |
Created container: extract-content | |
openshift-marketplace |
kubelet |
certified-operators-2lg56 |
Started |
Started container extract-content | |
| (x3) | openshift-cluster-storage-operator |
kubelet |
csi-snapshot-controller-74b6595c6d-q4766 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a26b20d3ef7b75aeb05acf9be2702f9d478822c43f679ff578811843692b960c" already present on machine |
| (x4) | openshift-cluster-storage-operator |
kubelet |
csi-snapshot-controller-74b6595c6d-q4766 |
Started |
Started container snapshot-controller |
| (x4) | openshift-cluster-storage-operator |
kubelet |
csi-snapshot-controller-74b6595c6d-q4766 |
Created |
Created container: snapshot-controller |
openshift-marketplace |
kubelet |
community-operators-t8vtc |
Pulled |
Successfully pulled image "registry.redhat.io/redhat/community-operator-index:v4.18" in 597ms (597ms including waiting). Image size: 1213306565 bytes. | |
openshift-marketplace |
kubelet |
community-operators-t8vtc |
Created |
Created container: extract-content | |
openshift-marketplace |
kubelet |
certified-operators-2lg56 |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aca57c8abaa83b2d1dce95fa7fe5b9416be70e100957ce48f212e2ec404387bc" | |
openshift-marketplace |
kubelet |
community-operators-t8vtc |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aca57c8abaa83b2d1dce95fa7fe5b9416be70e100957ce48f212e2ec404387bc" | |
openshift-marketplace |
kubelet |
certified-operators-2lg56 |
Created |
Created container: registry-server | |
openshift-marketplace |
kubelet |
certified-operators-2lg56 |
Started |
Started container registry-server | |
| (x2) | openshift-kube-apiserver-operator |
kubelet |
kube-apiserver-operator-54984b6678-p5mdv |
BackOff |
Back-off restarting failed container kube-apiserver-operator in pod kube-apiserver-operator-54984b6678-p5mdv_openshift-kube-apiserver-operator(e259b5a1-837b-4cde-85f7-cd5781af08bd) |
openshift-marketplace |
kubelet |
certified-operators-2lg56 |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aca57c8abaa83b2d1dce95fa7fe5b9416be70e100957ce48f212e2ec404387bc" in 389ms (389ms including waiting). Image size: 913084961 bytes. | |
openshift-marketplace |
kubelet |
community-operators-t8vtc |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aca57c8abaa83b2d1dce95fa7fe5b9416be70e100957ce48f212e2ec404387bc" in 413ms (413ms including waiting). Image size: 913084961 bytes. | |
openshift-marketplace |
kubelet |
community-operators-t8vtc |
Started |
Started container registry-server | |
openshift-marketplace |
kubelet |
community-operators-t8vtc |
Created |
Created container: registry-server | |
| (x3) | openshift-kube-scheduler-operator |
kubelet |
openshift-kube-scheduler-operator-7485d55966-wcpf8 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f56d01ef06fe016c748e8c87538dea8e9fcc84856eb116bd3597cc8e042e9f0a" already present on machine |
| (x3) | openshift-kube-scheduler-operator |
kubelet |
openshift-kube-scheduler-operator-7485d55966-wcpf8 |
Started |
Started container kube-scheduler-operator-container |
| (x3) | openshift-kube-scheduler-operator |
kubelet |
openshift-kube-scheduler-operator-7485d55966-wcpf8 |
Created |
Created container: kube-scheduler-operator-container |
| (x2) | openshift-controller-manager-operator |
kubelet |
openshift-controller-manager-operator-5f5f84757d-dsfkk |
BackOff |
Back-off restarting failed container openshift-controller-manager-operator in pod openshift-controller-manager-operator-5f5f84757d-dsfkk_openshift-controller-manager-operator(c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda) |
| (x2) | openshift-network-operator |
kubelet |
network-operator-6fcf4c966-l24cg |
Started |
Started container network-operator |
| (x2) | openshift-network-operator |
kubelet |
network-operator-6fcf4c966-l24cg |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aacc2698d097e25bf26e35393ef3536f7a240880d0a87f46a2b7ea3c13731d1e" already present on machine |
| (x2) | openshift-network-operator |
kubelet |
network-operator-6fcf4c966-l24cg |
Created |
Created container: network-operator |
| (x3) | openshift-apiserver-operator |
kubelet |
openshift-apiserver-operator-6d4655d9cf-5f5g9 |
Started |
Started container openshift-apiserver-operator |
| (x3) | openshift-apiserver-operator |
kubelet |
openshift-apiserver-operator-6d4655d9cf-5f5g9 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6dd544a8a6b4d08fe0f4fd076109c09cf181302ab6056ec6b2b89d68a52954c5" already present on machine |
| (x3) | openshift-apiserver-operator |
kubelet |
openshift-apiserver-operator-6d4655d9cf-5f5g9 |
Created |
Created container: openshift-apiserver-operator |
| (x2) | openshift-kube-storage-version-migrator-operator |
kubelet |
kube-storage-version-migrator-operator-cd5474998-tckph |
BackOff |
Back-off restarting failed container kube-storage-version-migrator-operator in pod kube-storage-version-migrator-operator-cd5474998-tckph_openshift-kube-storage-version-migrator-operator(0c58265d-32fb-4cf0-97d8-6c9a5d37fad9) |
| (x2) | openshift-kube-controller-manager-operator |
kubelet |
kube-controller-manager-operator-78ff47c7c5-xvzq9 |
BackOff |
Back-off restarting failed container kube-controller-manager-operator in pod kube-controller-manager-operator-78ff47c7c5-xvzq9_openshift-kube-controller-manager-operator(553d4535-9985-47e2-83ee-8fcfb6035e7b) |
| (x2) | openshift-service-ca-operator |
kubelet |
service-ca-operator-5dc4688546-sg75p |
BackOff |
Back-off restarting failed container service-ca-operator in pod service-ca-operator-5dc4688546-sg75p_openshift-service-ca-operator(65d9f008-7777-48fe-85fe-9d54a7bbcea9) |
| (x2) | openshift-config-operator |
kubelet |
openshift-config-operator-7c6bdb986f-fcnqs |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2431778975829348e271dc9bf123c7a24c81a52537a61414cd17c8436436078b" already present on machine |
| (x3) | openshift-config-operator |
kubelet |
openshift-config-operator-7c6bdb986f-fcnqs |
Created |
Created container: openshift-config-operator |
| (x3) | openshift-config-operator |
kubelet |
openshift-config-operator-7c6bdb986f-fcnqs |
Started |
Started container openshift-config-operator |
| (x3) | openshift-etcd-operator |
kubelet |
etcd-operator-67bf55ccdd-pjm6n |
BackOff |
Back-off restarting failed container etcd-operator in pod etcd-operator-67bf55ccdd-pjm6n_openshift-etcd-operator(f2546ffc-8d0a-4010-a3bd-9e69b6dbea40) |
openshift-cluster-storage-operator |
snapshot-controller-leader/csi-snapshot-controller-74b6595c6d-q4766 |
snapshot-controller-leader |
LeaderElection |
csi-snapshot-controller-74b6595c6d-q4766 became leader | |
openshift-marketplace |
kubelet |
redhat-operators-wzsv7 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" already present on machine | |
kube-system |
default-scheduler |
kube-scheduler |
LeaderElection |
master-0_d2988573-1e52-4482-8a8d-359f312d0478 became leader | |
openshift-marketplace |
multus |
redhat-operators-wzsv7 |
AddedInterface |
Add eth0 [10.128.0.57/23] from ovn-kubernetes | |
openshift-marketplace |
kubelet |
redhat-marketplace-7dzgz |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" already present on machine | |
openshift-marketplace |
multus |
redhat-marketplace-7dzgz |
AddedInterface |
Add eth0 [10.128.0.56/23] from ovn-kubernetes | |
openshift-marketplace |
kubelet |
redhat-marketplace-7dzgz |
Started |
Started container extract-utilities | |
| (x3) | openshift-kube-apiserver-operator |
kubelet |
kube-apiserver-operator-54984b6678-p5mdv |
Started |
Started container kube-apiserver-operator |
| (x3) | openshift-kube-apiserver-operator |
kubelet |
kube-apiserver-operator-54984b6678-p5mdv |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49" already present on machine |
| (x3) | openshift-kube-apiserver-operator |
kubelet |
kube-apiserver-operator-54984b6678-p5mdv |
Created |
Created container: kube-apiserver-operator |
openshift-marketplace |
kubelet |
redhat-operators-wzsv7 |
Started |
Started container extract-utilities | |
openshift-marketplace |
kubelet |
redhat-marketplace-7dzgz |
Created |
Created container: extract-utilities | |
openshift-marketplace |
kubelet |
redhat-operators-wzsv7 |
Pulling |
Pulling image "registry.redhat.io/redhat/redhat-operator-index:v4.18" | |
openshift-marketplace |
kubelet |
redhat-marketplace-7dzgz |
Pulling |
Pulling image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" | |
openshift-marketplace |
kubelet |
redhat-operators-wzsv7 |
Created |
Created container: extract-utilities | |
| (x4) | openshift-kube-storage-version-migrator-operator |
kubelet |
kube-storage-version-migrator-operator-cd5474998-tckph |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e391fce0b2e04f22fc089597db9e0671ba7f8b5b3a709151b5f33dd23b262144" already present on machine |
| (x4) | openshift-kube-storage-version-migrator-operator |
kubelet |
kube-storage-version-migrator-operator-cd5474998-tckph |
Created |
Created container: kube-storage-version-migrator-operator |
| (x4) | openshift-kube-storage-version-migrator-operator |
kubelet |
kube-storage-version-migrator-operator-cd5474998-tckph |
Started |
Started container kube-storage-version-migrator-operator |
openshift-marketplace |
kubelet |
redhat-marketplace-7dzgz |
Pulled |
Successfully pulled image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" in 1.635s (1.635s including waiting). Image size: 1201887930 bytes. | |
openshift-marketplace |
kubelet |
redhat-marketplace-7dzgz |
Created |
Created container: extract-content | |
openshift-marketplace |
kubelet |
redhat-operators-wzsv7 |
Pulled |
Successfully pulled image "registry.redhat.io/redhat/redhat-operator-index:v4.18" in 1.729s (1.73s including waiting). Image size: 1701476551 bytes. | |
| (x4) | openshift-controller-manager-operator |
kubelet |
openshift-controller-manager-operator-5f5f84757d-dsfkk |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f353131d8a1223db7f637c9851016b3a348d80c2b2be663a2db6d01e14ddca88" already present on machine |
openshift-marketplace |
kubelet |
redhat-operators-wzsv7 |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aca57c8abaa83b2d1dce95fa7fe5b9416be70e100957ce48f212e2ec404387bc" | |
openshift-marketplace |
kubelet |
redhat-marketplace-7dzgz |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aca57c8abaa83b2d1dce95fa7fe5b9416be70e100957ce48f212e2ec404387bc" | |
openshift-marketplace |
kubelet |
redhat-marketplace-7dzgz |
Started |
Started container extract-content | |
openshift-marketplace |
kubelet |
redhat-operators-wzsv7 |
Started |
Started container extract-content | |
| (x4) | openshift-controller-manager-operator |
kubelet |
openshift-controller-manager-operator-5f5f84757d-dsfkk |
Created |
Created container: openshift-controller-manager-operator |
openshift-marketplace |
kubelet |
redhat-operators-wzsv7 |
Created |
Created container: extract-content | |
| (x4) | openshift-controller-manager-operator |
kubelet |
openshift-controller-manager-operator-5f5f84757d-dsfkk |
Started |
Started container openshift-controller-manager-operator |
| (x4) | openshift-service-ca-operator |
kubelet |
service-ca-operator-5dc4688546-sg75p |
Started |
Started container service-ca-operator |
| (x4) | openshift-service-ca-operator |
kubelet |
service-ca-operator-5dc4688546-sg75p |
Created |
Created container: service-ca-operator |
| (x4) | openshift-service-ca-operator |
kubelet |
service-ca-operator-5dc4688546-sg75p |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5abe992def861ec075251ae17bbd66fa23bd05bd3701953c0fdcf68a8d161f1e" already present on machine |
openshift-marketplace |
kubelet |
redhat-marketplace-7dzgz |
Started |
Started container registry-server | |
| (x4) | openshift-kube-controller-manager-operator |
kubelet |
kube-controller-manager-operator-78ff47c7c5-xvzq9 |
Started |
Started container kube-controller-manager-operator |
| (x4) | openshift-kube-controller-manager-operator |
kubelet |
kube-controller-manager-operator-78ff47c7c5-xvzq9 |
Created |
Created container: kube-controller-manager-operator |
| (x4) | openshift-kube-controller-manager-operator |
kubelet |
kube-controller-manager-operator-78ff47c7c5-xvzq9 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e8ae0b5bab647ff989f276cead5f360bcb88c813f181d75dc3106eb5dbde0b39" already present on machine |
kube-system |
kube-controller-manager |
kube-controller-manager |
LeaderElection |
master-0_e85abd82-b772-48b5-9442-ed1eb793fc58 became leader | |
openshift-marketplace |
kubelet |
redhat-marketplace-7dzgz |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aca57c8abaa83b2d1dce95fa7fe5b9416be70e100957ce48f212e2ec404387bc" in 727ms (727ms including waiting). Image size: 913084961 bytes. | |
openshift-marketplace |
kubelet |
redhat-marketplace-7dzgz |
Created |
Created container: registry-server | |
openshift-operator-lifecycle-manager |
controllermanager |
packageserver-pdb |
NoPods |
No matching pods found | |
default |
node-controller |
master-0 |
RegisteredNode |
Node master-0 event: Registered Node master-0 in Controller | |
openshift-marketplace |
kubelet |
redhat-operators-wzsv7 |
Started |
Started container registry-server | |
openshift-marketplace |
kubelet |
redhat-operators-wzsv7 |
Created |
Created container: registry-server | |
openshift-marketplace |
kubelet |
redhat-operators-wzsv7 |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aca57c8abaa83b2d1dce95fa7fe5b9416be70e100957ce48f212e2ec404387bc" in 759ms (759ms including waiting). Image size: 913084961 bytes. | |
openshift-cluster-version |
replicaset-controller |
cluster-version-operator-649c4f5445 |
SuccessfulCreate |
Created pod: cluster-version-operator-649c4f5445-7kdb7 | |
openshift-cluster-version |
deployment-controller |
cluster-version-operator |
ScalingReplicaSet |
Scaled up replica set cluster-version-operator-649c4f5445 to 1 | |
openshift-cluster-version |
kubelet |
cluster-version-operator-649c4f5445-7kdb7 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-release@sha256:6177c447b98c36a42fd45fa2ba413da73d14d0a7ad3aecfa977554f5ae9583cc" already present on machine | |
openshift-cluster-version |
openshift-cluster-version |
version |
LeaderElection |
master-0_5b8e987d-d181-4fa9-a704-c1e40ec18c5e became leader | |
openshift-cluster-version |
kubelet |
cluster-version-operator-649c4f5445-7kdb7 |
Started |
Started container cluster-version-operator | |
openshift-cluster-version |
kubelet |
cluster-version-operator-649c4f5445-7kdb7 |
Created |
Created container: cluster-version-operator | |
openshift-cluster-version |
openshift-cluster-version |
version |
LoadPayload |
Loading payload version="4.18.32" image="quay.io/openshift-release-dev/ocp-release@sha256:6177c447b98c36a42fd45fa2ba413da73d14d0a7ad3aecfa977554f5ae9583cc" | |
openshift-cluster-version |
openshift-cluster-version |
version |
RetrievePayload |
Retrieving and verifying payload version="4.18.32" image="quay.io/openshift-release-dev/ocp-release@sha256:6177c447b98c36a42fd45fa2ba413da73d14d0a7ad3aecfa977554f5ae9583cc" | |
| (x4) | openshift-etcd-operator |
kubelet |
etcd-operator-67bf55ccdd-pjm6n |
Started |
Started container etcd-operator |
| (x4) | openshift-etcd-operator |
kubelet |
etcd-operator-67bf55ccdd-pjm6n |
Created |
Created container: etcd-operator |
| (x4) | openshift-etcd-operator |
kubelet |
etcd-operator-67bf55ccdd-pjm6n |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6dd9324942b3d09b4b9a768f36b47be4e555d947910ee3d115fc5448c95f7399" already present on machine |
openshift-cluster-version |
openshift-cluster-version |
version |
PayloadLoaded |
Payload loaded version="4.18.32" image="quay.io/openshift-release-dev/ocp-release@sha256:6177c447b98c36a42fd45fa2ba413da73d14d0a7ad3aecfa977554f5ae9583cc" architecture="amd64" | |
openshift-marketplace |
kubelet |
redhat-operators-wzsv7 |
Unhealthy |
Startup probe failed: timeout: failed to connect service ":50051" within 1s | |
openshift-machine-api |
replicaset-controller |
control-plane-machine-set-operator-d8bf84b88 |
SuccessfulCreate |
Created pod: control-plane-machine-set-operator-d8bf84b88-hmpc7 | |
openshift-machine-api |
deployment-controller |
control-plane-machine-set-operator |
ScalingReplicaSet |
Scaled up replica set control-plane-machine-set-operator-d8bf84b88 to 1 | |
openshift-machine-api |
kubelet |
control-plane-machine-set-operator-d8bf84b88-hmpc7 |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47c1d88223ffb35bb36a4d2bde736fb3e45f08e204519387e0e52e3e3dc00cfb" | |
openshift-machine-api |
multus |
control-plane-machine-set-operator-d8bf84b88-hmpc7 |
AddedInterface |
Add eth0 [10.128.0.58/23] from ovn-kubernetes | |
openshift-machine-api |
kubelet |
control-plane-machine-set-operator-d8bf84b88-hmpc7 |
Started |
Started container control-plane-machine-set-operator | |
openshift-machine-api |
control-plane-machine-set-operator-d8bf84b88-hmpc7_4276326f-e9fa-4f96-a306-88fa9fedceef |
control-plane-machine-set-leader |
LeaderElection |
control-plane-machine-set-operator-d8bf84b88-hmpc7_4276326f-e9fa-4f96-a306-88fa9fedceef became leader | |
openshift-machine-api |
kubelet |
control-plane-machine-set-operator-d8bf84b88-hmpc7 |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47c1d88223ffb35bb36a4d2bde736fb3e45f08e204519387e0e52e3e3dc00cfb" in 1.844s (1.844s including waiting). Image size: 465507019 bytes. | |
openshift-machine-api |
kubelet |
control-plane-machine-set-operator-d8bf84b88-hmpc7 |
Created |
Created container: control-plane-machine-set-operator | |
openshift-cluster-machine-approver |
replicaset-controller |
machine-approver-6c46d95f74 |
SuccessfulCreate |
Created pod: machine-approver-6c46d95f74-nsmfx | |
openshift-cluster-machine-approver |
deployment-controller |
machine-approver |
ScalingReplicaSet |
Scaled up replica set machine-approver-6c46d95f74 to 1 | |
openshift-cloud-credential-operator |
deployment-controller |
cloud-credential-operator |
ScalingReplicaSet |
Scaled up replica set cloud-credential-operator-595c8f9ff to 1 | |
openshift-cloud-credential-operator |
replicaset-controller |
cloud-credential-operator-595c8f9ff |
SuccessfulCreate |
Created pod: cloud-credential-operator-595c8f9ff-p8hbc | |
openshift-cluster-samples-operator |
replicaset-controller |
cluster-samples-operator-f8cbff74c |
SuccessfulCreate |
Created pod: cluster-samples-operator-f8cbff74c-hr9g4 | |
openshift-cluster-samples-operator |
deployment-controller |
cluster-samples-operator |
ScalingReplicaSet |
Scaled up replica set cluster-samples-operator-f8cbff74c to 1 | |
openshift-machine-api |
deployment-controller |
cluster-baremetal-operator |
ScalingReplicaSet |
Scaled up replica set cluster-baremetal-operator-7bc947fc7d to 1 | |
openshift-machine-api |
replicaset-controller |
cluster-baremetal-operator-7bc947fc7d |
SuccessfulCreate |
Created pod: cluster-baremetal-operator-7bc947fc7d-8qkdw | |
openshift-machine-api |
replicaset-controller |
cluster-autoscaler-operator-67fd9768b5 |
SuccessfulCreate |
Created pod: cluster-autoscaler-operator-67fd9768b5-6dzpr | |
openshift-machine-api |
kubelet |
cluster-baremetal-operator-7bc947fc7d-8qkdw |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b8fb1f11df51c131f5be8ddfc1b1c95ac13481f58d2dcd5a465a4a8341c0f49" | |
openshift-machine-api |
deployment-controller |
cluster-autoscaler-operator |
ScalingReplicaSet |
Scaled up replica set cluster-autoscaler-operator-67fd9768b5 to 1 | |
openshift-machine-api |
multus |
cluster-baremetal-operator-7bc947fc7d-8qkdw |
AddedInterface |
Add eth0 [10.128.0.61/23] from ovn-kubernetes | |
openshift-machine-config-operator |
replicaset-controller |
machine-config-operator-84976bb859 |
SuccessfulCreate |
Created pod: machine-config-operator-84976bb859-kmc95 | |
openshift-cluster-storage-operator |
replicaset-controller |
cluster-storage-operator-75b869db96 |
SuccessfulCreate |
Created pod: cluster-storage-operator-75b869db96-qbmw5 | |
openshift-insights |
replicaset-controller |
insights-operator-cb4f7b4cf |
SuccessfulCreate |
Created pod: insights-operator-cb4f7b4cf-cmbjq | |
openshift-cluster-storage-operator |
deployment-controller |
cluster-storage-operator |
ScalingReplicaSet |
Scaled up replica set cluster-storage-operator-75b869db96 to 1 | |
openshift-insights |
deployment-controller |
insights-operator |
ScalingReplicaSet |
Scaled up replica set insights-operator-cb4f7b4cf to 1 | |
openshift-machine-config-operator |
deployment-controller |
machine-config-operator |
ScalingReplicaSet |
Scaled up replica set machine-config-operator-84976bb859 to 1 | |
| (x2) | openshift-operator-lifecycle-manager |
operator-lifecycle-manager |
packageserver |
RequirementsUnknown |
InstallModes now support target namespaces |
| | openshift-cluster-storage-operator | multus | cluster-storage-operator-75b869db96-qbmw5 | AddedInterface | Add eth0 [10.128.0.64/23] from ovn-kubernetes |
| | openshift-cluster-storage-operator | kubelet | cluster-storage-operator-75b869db96-qbmw5 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a90d19460fbc705172df7759a3da394930623c6b6974620b79ffa07bab53c51f" |
| | openshift-machine-api | kubelet | cluster-baremetal-operator-7bc947fc7d-8qkdw | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-machine-api | kubelet | cluster-baremetal-operator-7bc947fc7d-8qkdw | Started | Started container cluster-baremetal-operator |
| | openshift-machine-config-operator | multus | machine-config-operator-84976bb859-kmc95 | AddedInterface | Add eth0 [10.128.0.63/23] from ovn-kubernetes |
| | openshift-operator-lifecycle-manager | replicaset-controller | packageserver-67d4dbd88b | SuccessfulCreate | Created pod: packageserver-67d4dbd88b-szr25 |
| | openshift-machine-api | replicaset-controller | machine-api-operator-bd7dd5c46 | SuccessfulCreate | Created pod: machine-api-operator-bd7dd5c46-g6fgz |
| | openshift-machine-config-operator | kubelet | machine-config-operator-84976bb859-kmc95 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9250bc5d881852654c420b833aa018257e927522e9d8e1b74307dd7b4b0bfc42" already present on machine |
| | openshift-machine-api | kubelet | cluster-baremetal-operator-7bc947fc7d-8qkdw | Started | Started container baremetal-kube-rbac-proxy |
| | openshift-machine-api | kubelet | cluster-baremetal-operator-7bc947fc7d-8qkdw | Created | Created container: baremetal-kube-rbac-proxy |
| | openshift-machine-api | kubelet | cluster-baremetal-operator-7bc947fc7d-8qkdw | Created | Created container: cluster-baremetal-operator |
| | openshift-insights | kubelet | insights-operator-cb4f7b4cf-cmbjq | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6ab8803bac3ebada13e90d9dd6208301b981488277cdeb847c25ff8002f5a30" |
| | openshift-insights | multus | insights-operator-cb4f7b4cf-cmbjq | AddedInterface | Add eth0 [10.128.0.65/23] from ovn-kubernetes |
| | openshift-machine-api | kubelet | cluster-baremetal-operator-7bc947fc7d-8qkdw | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b8fb1f11df51c131f5be8ddfc1b1c95ac13481f58d2dcd5a465a4a8341c0f49" in 2.027s (2.027s including waiting). Image size: 465648392 bytes. |
| | openshift-machine-api | deployment-controller | machine-api-operator | ScalingReplicaSet | Scaled up replica set machine-api-operator-bd7dd5c46 to 1 |
| | openshift-cloud-controller-manager-operator | deployment-controller | cluster-cloud-controller-manager-operator | ScalingReplicaSet | Scaled up replica set cluster-cloud-controller-manager-operator-5b487c8bfc to 1 |
| | openshift-machine-config-operator | machine-config-operator | master-0 | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-operator-lifecycle-manager | deployment-controller | packageserver | ScalingReplicaSet | Scaled up replica set packageserver-67d4dbd88b to 1 |
| | openshift-cloud-controller-manager-operator | replicaset-controller | cluster-cloud-controller-manager-operator-5b487c8bfc | SuccessfulCreate | Created pod: cluster-cloud-controller-manager-operator-5b487c8bfc-jdktd |
| | openshift-machine-config-operator | kubelet | machine-config-operator-84976bb859-kmc95 | Created | Created container: machine-config-operator |
| | openshift-machine-config-operator | kubelet | machine-config-operator-84976bb859-kmc95 | Started | Started container machine-config-operator |
| | openshift-machine-config-operator | kubelet | machine-config-operator-84976bb859-kmc95 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-5b487c8bfc-jdktd | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9f2b80358f029728d7f4ce46418bb6859d9ea7365de7b6f97a5f549ed6e77471" |
| | openshift-machine-config-operator | kubelet | machine-config-operator-84976bb859-kmc95 | Created | Created container: kube-rbac-proxy |
| | openshift-machine-config-operator | kubelet | machine-config-operator-84976bb859-kmc95 | Started | Started container kube-rbac-proxy |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/machine-config-daemon because it was missing |
| | openshift-machine-api | cluster-baremetal-operator-7bc947fc7d-8qkdw_3aedd3e3-a9b4-43b1-83a5-a5e11e17456e | cluster-baremetal-operator | LeaderElection | cluster-baremetal-operator-7bc947fc7d-8qkdw_3aedd3e3-a9b4-43b1-83a5-a5e11e17456e became leader |
| | openshift-operator-lifecycle-manager | multus | packageserver-67d4dbd88b-szr25 | AddedInterface | Add eth0 [10.128.0.66/23] from ovn-kubernetes |
| | openshift-operator-lifecycle-manager | kubelet | packageserver-67d4dbd88b-szr25 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" already present on machine |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/machine-config-daemon-events -n openshift-machine-config-operator because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | SecretCreated | Created Secret/worker-user-data-managed -n openshift-machine-api because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | SecretCreated | Created Secret/master-user-data-managed -n openshift-machine-api because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/machine-config-daemon-events -n default because it was missing |
| | openshift-operator-lifecycle-manager | kubelet | packageserver-67d4dbd88b-szr25 | Started | Started container packageserver |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/machine-config-daemon -n openshift-machine-config-operator because it was missing |
| | openshift-operator-lifecycle-manager | kubelet | packageserver-67d4dbd88b-szr25 | Created | Created container: packageserver |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/machine-config-daemon-events because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/mcd-prometheus-k8s -n openshift-machine-config-operator because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/machine-config-daemon because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ServiceAccountCreated | Created ServiceAccount/machine-config-daemon -n openshift-machine-config-operator because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/mcd-prometheus-k8s -n openshift-machine-config-operator because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/machine-config-daemon -n openshift-machine-config-operator because it was missing |
| | openshift-insights | kubelet | insights-operator-cb4f7b4cf-cmbjq | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6ab8803bac3ebada13e90d9dd6208301b981488277cdeb847c25ff8002f5a30" in 2.453s (2.453s including waiting). Image size: 499489508 bytes. |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ValidatingAdmissionPolicyBindingCreated | Created ValidatingAdmissionPolicyBinding.admissionregistration.k8s.io/mcn-guards-binding because it was missing |
| | openshift-insights | openshift-insights-operator | insights-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ValidatingAdmissionPolicyCreated | Created ValidatingAdmissionPolicy.admissionregistration.k8s.io/mcn-guards because it was missing |
| | openshift-machine-config-operator | daemonset-controller | machine-config-daemon | SuccessfulCreate | Created pod: machine-config-daemon-r6sfp |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-5b487c8bfc-jdktd | Created | Created container: cluster-cloud-controller-manager |
| | openshift-cluster-storage-operator | kubelet | cluster-storage-operator-75b869db96-qbmw5 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a90d19460fbc705172df7759a3da394930623c6b6974620b79ffa07bab53c51f" in 6.168s (6.168s including waiting). Image size: 508404525 bytes. |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-r6sfp | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-r6sfp | Started | Started container machine-config-daemon |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-r6sfp | Created | Created container: machine-config-daemon |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-r6sfp | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9250bc5d881852654c420b833aa018257e927522e9d8e1b74307dd7b4b0bfc42" already present on machine |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-r6sfp | Started | Started container kube-rbac-proxy |
| | openshift-cluster-storage-operator | kubelet | cluster-storage-operator-75b869db96-qbmw5 | Created | Created container: cluster-storage-operator |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-r6sfp | Created | Created container: kube-rbac-proxy |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-5b487c8bfc-jdktd | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9f2b80358f029728d7f4ce46418bb6859d9ea7365de7b6f97a5f549ed6e77471" in 5.681s (5.681s including waiting). Image size: 552251951 bytes. |
| | openshift-cluster-storage-operator | kubelet | cluster-storage-operator-75b869db96-qbmw5 | Started | Started container cluster-storage-operator |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-5b487c8bfc-jdktd | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9f2b80358f029728d7f4ce46418bb6859d9ea7365de7b6f97a5f549ed6e77471" already present on machine |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-5b487c8bfc-jdktd | Started | Started container cluster-cloud-controller-manager |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-5b487c8bfc-jdktd | Created | Created container: config-sync-controllers |
| | openshift-cluster-storage-operator | cluster-storage-operator | cluster-storage-operator-lock | LeaderElection | cluster-storage-operator-75b869db96-qbmw5_3e76d6c0-c13c-4751-9b12-1af45f8f1cb5 became leader |
| | openshift-cloud-controller-manager-operator | master-0_2c90c2a6-29f9-4d45-a9a7-8dc695a0f2ce | cluster-cloud-controller-manager-leader | LeaderElection | master-0_2c90c2a6-29f9-4d45-a9a7-8dc695a0f2ce became leader |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-5b487c8bfc-jdktd | Started | Started container config-sync-controllers |
| | openshift-cluster-storage-operator | cluster-storage-operator | cluster-storage-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-cluster-storage-operator | cluster-storage-operator-status-controller-statussyncer_storage | cluster-storage-operator | OperatorVersionChanged | clusteroperator/storage version "operator" changed from "" to "4.18.32" |
| | openshift-cloud-controller-manager-operator | master-0_17c3e8b0-cf7d-4515-b572-a6556dd34c77 | cluster-cloud-config-sync-leader | LeaderElection | master-0_17c3e8b0-cf7d-4515-b572-a6556dd34c77 became leader |
| | openshift-cloud-controller-manager | cloud-controller-manager-operator | openshift-cloud-controller-manager | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-cluster-storage-operator | cluster-storage-operator-status-controller-statussyncer_storage | cluster-storage-operator | OperatorStatusChanged | Status for clusteroperator/storage changed: Degraded changed from Unknown to False ("All is well") |
| | openshift-cluster-storage-operator | cluster-storage-operator-status-controller-statussyncer_storage | cluster-storage-operator | OperatorStatusChanged | Status for clusteroperator/storage changed: Progressing changed from Unknown to False ("All is well"),Available changed from Unknown to True ("DefaultStorageClassControllerAvailable: No default StorageClass for this platform"),Upgradeable changed from Unknown to True ("All is well") |
| | openshift-cluster-storage-operator | cluster-storage-operator-status-controller-statussyncer_storage | cluster-storage-operator | OperatorStatusChanged | Status for clusteroperator/storage changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"" "namespaces" "" "openshift-cluster-storage-operator"} {"" "namespaces" "" "openshift-cluster-csi-drivers"} {"operator.openshift.io" "storages" "" "cluster"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "cluster-storage-operator-role"}],status.versions changed from [] to [{"operator" "4.18.32"}] |
| (x2) | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-5b487c8bfc-jdktd | Created | Created container: kube-rbac-proxy |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/machine-config-controller because it was missing |
| (x2) | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-5b487c8bfc-jdktd | Started | Started container kube-rbac-proxy |
| (x2) | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-5b487c8bfc-jdktd | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/machine-os-puller-binding -n openshift-machine-config-operator because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/mcc-prometheus-k8s -n openshift-machine-config-operator because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/machine-config-controller-events -n openshift-machine-config-operator because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/machine-config-controller-events -n default because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/mcc-prometheus-k8s -n openshift-machine-config-operator because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/machine-config-controller-events because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ValidatingAdmissionPolicyBindingCreated | Created ValidatingAdmissionPolicyBinding.admissionregistration.k8s.io/machine-configuration-guards-binding because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ServiceAccountCreated | Created ServiceAccount/machine-config-controller -n openshift-machine-config-operator because it was missing |
| (x6) | openshift-cluster-machine-approver | kubelet | machine-approver-6c46d95f74-nsmfx | FailedMount | MountVolume.SetUp failed for volume "machine-approver-tls" : secret "machine-approver-tls" not found |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ValidatingAdmissionPolicyCreated | Created ValidatingAdmissionPolicy.admissionregistration.k8s.io/custom-machine-config-pool-selector because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ValidatingAdmissionPolicyCreated | Created ValidatingAdmissionPolicy.admissionregistration.k8s.io/managed-bootimages-platform-check because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ValidatingAdmissionPolicyCreated | Created ValidatingAdmissionPolicy.admissionregistration.k8s.io/machine-configuration-guards because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/machine-config-controller because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ServiceAccountCreated | Created ServiceAccount/machine-os-puller -n openshift-machine-config-operator because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ValidatingAdmissionPolicyBindingCreated | Created ValidatingAdmissionPolicyBinding.admissionregistration.k8s.io/custom-machine-config-pool-selector-binding because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ValidatingAdmissionPolicyBindingCreated | Created ValidatingAdmissionPolicyBinding.admissionregistration.k8s.io/managed-bootimages-platform-check-binding because it was missing |
| | openshift-machine-config-operator | deployment-controller | machine-config-controller | ScalingReplicaSet | Scaled up replica set machine-config-controller-686c884b4d to 1 |
| (x2) | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-5b487c8bfc-jdktd | BackOff | Back-off restarting failed container kube-rbac-proxy in pod cluster-cloud-controller-manager-operator-5b487c8bfc-jdktd_openshift-cloud-controller-manager-operator(317bc9db-ab82-4df1-81da-1a091f88acb1) |
| | openshift-machine-config-operator | replicaset-controller | machine-config-controller-686c884b4d | SuccessfulCreate | Created pod: machine-config-controller-686c884b4d-5q97f |
| | openshift-machine-config-operator | kubelet | machine-config-controller-686c884b4d-5q97f | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-machine-config-operator | multus | machine-config-controller-686c884b4d-5q97f | AddedInterface | Add eth0 [10.128.0.68/23] from ovn-kubernetes |
| | openshift-machine-config-operator | kubelet | machine-config-controller-686c884b4d-5q97f | Started | Started container kube-rbac-proxy |
| | openshift-machine-config-operator | kubelet | machine-config-controller-686c884b4d-5q97f | Created | Created container: kube-rbac-proxy |
openshift-machine-config-operator |
machine-config-operator |
master-0 |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} | |
openshift-machine-config-operator |
kubelet |
machine-config-controller-686c884b4d-5q97f |
Started |
Started container machine-config-controller | |
openshift-machine-config-operator |
kubelet |
machine-config-controller-686c884b4d-5q97f |
Created |
Created container: machine-config-controller | |
openshift-machine-config-operator |
kubelet |
machine-config-controller-686c884b4d-5q97f |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9250bc5d881852654c420b833aa018257e927522e9d8e1b74307dd7b4b0bfc42" already present on machine | |
openshift-operator-lifecycle-manager |
kubelet |
collect-profiles-29522340-8cp6h |
Started |
Started container collect-profiles | |
openshift-network-diagnostics |
kubelet |
network-check-source-7d8f4c8c66-fc8n7 |
Created |
Created container: check-endpoints | |
openshift-network-diagnostics |
kubelet |
network-check-source-7d8f4c8c66-fc8n7 |
Started |
Started container check-endpoints | |
openshift-network-diagnostics |
multus |
network-check-source-7d8f4c8c66-fc8n7 |
AddedInterface |
Add eth0 [10.128.0.70/23] from ovn-kubernetes | |
openshift-operator-lifecycle-manager |
kubelet |
collect-profiles-29522340-8cp6h |
Created |
Created container: collect-profiles | |
openshift-operator-lifecycle-manager |
kubelet |
collect-profiles-29522340-8cp6h |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" already present on machine | |
openshift-operator-lifecycle-manager |
multus |
collect-profiles-29522340-8cp6h |
AddedInterface |
Add eth0 [10.128.0.71/23] from ovn-kubernetes | |
openshift-monitoring |
multus |
prometheus-operator-admission-webhook-695b766898-nm8rs |
AddedInterface |
Add eth0 [10.128.0.69/23] from ovn-kubernetes | |
openshift-monitoring |
kubelet |
prometheus-operator-admission-webhook-695b766898-nm8rs |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:99ad83497ea12819957ccba33c807c6e4c5297621db568e5635202cb9cc69f8f" | |
| | openshift-network-diagnostics | kubelet | network-check-source-7d8f4c8c66-fc8n7 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aacc2698d097e25bf26e35393ef3536f7a240880d0a87f46a2b7ea3c13731d1e" already present on machine |
| | openshift-ingress | kubelet | router-default-864ddd5f56-g8w2f | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b318889972c37662382a2905888bb3f1cfd71a433b6afa3504cc12f3c6fa6eb" |
| | openshift-cluster-machine-approver | deployment-controller | machine-approver | ScalingReplicaSet | Scaled down replica set machine-approver-6c46d95f74 to 0 from 1 |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system-bootstrap-node-renewal because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceCreated | Created Service/prometheus-operator -n openshift-monitoring because it was missing |
| | openshift-cluster-machine-approver | replicaset-controller | machine-approver-6c46d95f74 | SuccessfulDelete | Deleted pod: machine-approver-6c46d95f74-nsmfx |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ValidatingWebhookConfigurationCreated | Created ValidatingWebhookConfiguration.admissionregistration.k8s.io/prometheusrules.openshift.io because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ValidatingWebhookConfigurationCreated | Created ValidatingWebhookConfiguration.admissionregistration.k8s.io/alertmanagerconfigs.openshift.io because it was missing |
| | openshift-monitoring | kubelet | prometheus-operator-admission-webhook-695b766898-nm8rs | Started | Started container prometheus-operator-admission-webhook |
| | openshift-machine-config-operator | daemonset-controller | machine-config-server | SuccessfulCreate | Created pod: machine-config-server-l576h |
| | openshift-monitoring | replicaset-controller | prometheus-operator-7485d645b8 | SuccessfulCreate | Created pod: prometheus-operator-7485d645b8-nzz2j |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | SecretCreated | Created Secret/node-bootstrapper-token -n openshift-machine-config-operator because it was missing |
| | openshift-monitoring | kubelet | prometheus-operator-admission-webhook-695b766898-nm8rs | Created | Created container: prometheus-operator-admission-webhook |
| | openshift-monitoring | kubelet | prometheus-operator-admission-webhook-695b766898-nm8rs | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:99ad83497ea12819957ccba33c807c6e4c5297621db568e5635202cb9cc69f8f" in 1.574s (1.574s including waiting). Image size: 439402958 bytes. |
| | openshift-monitoring | deployment-controller | prometheus-operator | ScalingReplicaSet | Scaled up replica set prometheus-operator-7485d645b8 to 1 |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ServiceAccountCreated | Created ServiceAccount/node-bootstrapper -n openshift-machine-config-operator because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ServiceAccountCreated | Created ServiceAccount/machine-config-server -n openshift-machine-config-operator because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/machine-config-server because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/machine-config-server because it was missing |
| | openshift-machine-config-operator | kubelet | machine-config-server-l576h | Started | Started container machine-config-server |
| | openshift-cluster-machine-approver | deployment-controller | machine-approver | ScalingReplicaSet | Scaled up replica set machine-approver-8569dd85ff to 1 |
| | openshift-cluster-machine-approver | replicaset-controller | machine-approver-8569dd85ff | SuccessfulCreate | Created pod: machine-approver-8569dd85ff-f9g8s |
| | openshift-machine-config-operator | kubelet | machine-config-server-l576h | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9250bc5d881852654c420b833aa018257e927522e9d8e1b74307dd7b4b0bfc42" already present on machine |
| | openshift-machine-config-operator | kubelet | machine-config-server-l576h | Created | Created container: machine-config-server |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/machine-os-builder-events -n default because it was missing |
| | openshift-machine-config-operator | machineconfigcontroller-rendercontroller | master | RenderedConfigGenerated | rendered-master-df68dbacb4242702506774288173e62e successfully generated (release version: 4.18.32, controller version: 0b0569287da3daea19bf47aa298037ccb4cbff98) |
| | openshift-machine-config-operator | machineconfigcontroller-rendercontroller | worker | RenderedConfigGenerated | rendered-worker-9756454e8727157d80899d278483e5d2 successfully generated (release version: 4.18.32, controller version: 0b0569287da3daea19bf47aa298037ccb4cbff98) |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/machine-os-builder-events because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/machine-os-builder because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/machine-os-builder-events -n openshift-machine-config-operator because it was missing |
| | openshift-machine-config-operator | machineconfigoperator | machine-config | OperatorDegraded: RequiredPoolsFailed | Unable to apply 4.18.32: error during syncRequiredMachineConfigPools: context deadline exceeded |
| | openshift-operator-lifecycle-manager | job-controller | collect-profiles-29522340 | Completed | Job completed |
| | openshift-network-node-identity | master-0_d59ff3d7-aeca-48d2-a7f5-f43bba258aaa | ovnkube-identity | LeaderElection | master-0_d59ff3d7-aeca-48d2-a7f5-f43bba258aaa became leader |
| | openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SawCompletedJob | Saw completed job: collect-profiles-29522340, condition: Complete |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/machine-os-builder because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ServiceAccountCreated | Created ServiceAccount/machine-os-builder -n openshift-machine-config-operator because it was missing |
| (x2) | openshift-machine-config-operator | machineconfigoperator | machine-config | OperatorVersionChanged | clusteroperator/machine-config started a version change from [] to [{operator 4.18.32} {operator-image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9250bc5d881852654c420b833aa018257e927522e9d8e1b74307dd7b4b0bfc42}] |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/machine-os-builder-anyuid because it was missing |
| | openshift-machine-config-operator | machineconfigcontroller-nodecontroller | master | AnnotationChange | Node master-0 now has machineconfiguration.openshift.io/state=Done |
| | openshift-machine-config-operator | machineconfigcontroller-nodecontroller | master | AnnotationChange | Node master-0 now has machineconfiguration.openshift.io/desiredConfig=rendered-master-df68dbacb4242702506774288173e62e |
| | openshift-machine-config-operator | machineconfigcontroller-nodecontroller | master | AnnotationChange | Node master-0 now has machineconfiguration.openshift.io/currentConfig=rendered-master-df68dbacb4242702506774288173e62e |
| | openshift-cloud-controller-manager-operator | replicaset-controller | cluster-cloud-controller-manager-operator-5b487c8bfc | SuccessfulDelete | Deleted pod: cluster-cloud-controller-manager-operator-5b487c8bfc-jdktd |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-5b487c8bfc-jdktd | Killing | Stopping container cluster-cloud-controller-manager |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-5b487c8bfc-jdktd | Killing | Stopping container config-sync-controllers |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-cloud-controller-manager-operator | deployment-controller | cluster-cloud-controller-manager-operator | ScalingReplicaSet | Scaled down replica set cluster-cloud-controller-manager-operator-5b487c8bfc to 0 from 1 |
| | openshift-cloud-controller-manager-operator | deployment-controller | cluster-cloud-controller-manager-operator | ScalingReplicaSet | Scaled up replica set cluster-cloud-controller-manager-operator-6fb8ffcd9b to 1 |
| | openshift-cloud-controller-manager-operator | replicaset-controller | cluster-cloud-controller-manager-operator-6fb8ffcd9b | SuccessfulCreate | Created pod: cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9f2b80358f029728d7f4ce46418bb6859d9ea7365de7b6f97a5f549ed6e77471" already present on machine |
| | openshift-ingress | kubelet | router-default-864ddd5f56-g8w2f | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b318889972c37662382a2905888bb3f1cfd71a433b6afa3504cc12f3c6fa6eb" in 10.541s (10.541s including waiting). Image size: 481879166 bytes. |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c | Started | Started container cluster-cloud-controller-manager |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c | Created | Created container: cluster-cloud-controller-manager |
| | openshift-cloud-controller-manager | cloud-controller-manager-operator | openshift-cloud-controller-manager | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9f2b80358f029728d7f4ce46418bb6859d9ea7365de7b6f97a5f549ed6e77471" already present on machine |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c | Created | Created container: config-sync-controllers |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c | Started | Started container config-sync-controllers |
| | openshift-ingress | kubelet | router-default-864ddd5f56-g8w2f | Started | Started container router |
| | openshift-ingress | kubelet | router-default-864ddd5f56-g8w2f | Created | Created container: router |
| | openshift-machine-config-operator | machineconfigoperator | machine-config | OperatorVersionChanged | clusteroperator/machine-config version changed from [] to [{operator 4.18.32} {operator-image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9250bc5d881852654c420b833aa018257e927522e9d8e1b74307dd7b4b0bfc42}] |
| (x10) | openshift-ingress | kubelet | router-default-864ddd5f56-g8w2f | Unhealthy | Startup probe failed: HTTP probe failed with statuscode: 500 |
| (x11) | openshift-ingress | kubelet | router-default-864ddd5f56-g8w2f | ProbeError | Startup probe error: HTTP probe failed with statuscode: 500 body: [-]backend-http failed: reason withheld [-]has-synced failed: reason withheld [+]process-running ok healthz check failed |
| | openshift-machine-config-operator | machineconfigdaemon | master-0 | Uncordon | Update completed for config rendered-master-df68dbacb4242702506774288173e62e and node has been uncordoned |
| | openshift-machine-config-operator | machineconfigdaemon | master-0 | NodeDone | Setting node master-0, currentConfig rendered-master-df68dbacb4242702506774288173e62e to Done |
| | openshift-machine-config-operator | machineconfigcontroller-nodecontroller | master | AnnotationChange | Node master-0 now has machineconfiguration.openshift.io/reason= |
| | openshift-machine-config-operator | machineconfigdaemon | master-0 | ConfigDriftMonitorStarted | Config Drift Monitor started, watching against rendered-master-df68dbacb4242702506774288173e62e |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| (x4) | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| (x4) | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c | Started | Started container kube-rbac-proxy |
| (x4) | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c | Created | Created container: kube-rbac-proxy |
| | openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-ingress-canary namespace |
| | openshift-ingress-canary | daemonset-controller | ingress-canary | SuccessfulCreate | Created pod: ingress-canary-6bhf8 |
| | openshift-ingress-operator | cluster-ingress-operator | ingress-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-catalogd | catalogd-controller-manager-67bc7c997f-jdfsm_f43a2e7f-66aa-4924-bb47-a867f4ff1473 | catalogd-operator-lock | LeaderElection | catalogd-controller-manager-67bc7c997f-jdfsm_f43a2e7f-66aa-4924-bb47-a867f4ff1473 became leader |
| | openshift-operator-controller | operator-controller-controller-manager-85c9b89969-4n2ls_16209c56-8cb4-4300-a550-f9d022bb10dc | 9c4404e7.operatorframework.io | LeaderElection | operator-controller-controller-manager-85c9b89969-4n2ls_16209c56-8cb4-4300-a550-f9d022bb10dc became leader |
| (x9) | openshift-cloud-credential-operator | kubelet | cloud-credential-operator-595c8f9ff-p8hbc | FailedMount | MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" : secret "cloud-credential-operator-serving-cert" not found |
| (x9) | openshift-cluster-samples-operator | kubelet | cluster-samples-operator-f8cbff74c-hr9g4 | FailedMount | MountVolume.SetUp failed for volume "samples-operator-tls" : secret "samples-operator-tls" not found |
| (x9) | openshift-machine-api | kubelet | cluster-autoscaler-operator-67fd9768b5-6dzpr | FailedMount | MountVolume.SetUp failed for volume "cert" : secret "cluster-autoscaler-operator-cert" not found |
| (x9) | openshift-machine-api | kubelet | machine-api-operator-bd7dd5c46-g6fgz | FailedMount | MountVolume.SetUp failed for volume "machine-api-operator-tls" : secret "machine-api-operator-tls" not found |
| (x9) | openshift-monitoring | kubelet | prometheus-operator-7485d645b8-nzz2j | FailedMount | MountVolume.SetUp failed for volume "prometheus-operator-tls" : secret "prometheus-operator-tls" not found |
| (x9) | openshift-cluster-machine-approver |
kubelet |
machine-approver-8569dd85ff-f9g8s |
FailedMount |
MountVolume.SetUp failed for volume "machine-approver-tls" : secret "machine-approver-tls" not found |
openshift-cloud-controller-manager-operator |
master-0_30f52023-4117-4bc9-ae38-b0633a770d65 |
cluster-cloud-config-sync-leader |
LeaderElection |
master-0_30f52023-4117-4bc9-ae38-b0633a770d65 became leader | |
openshift-cloud-controller-manager-operator |
master-0_5ce35587-7807-420c-aa37-2624fa4f1a44 |
cluster-cloud-controller-manager-leader |
LeaderElection |
master-0_5ce35587-7807-420c-aa37-2624fa4f1a44 became leader | |
| (x9) | openshift-ingress-canary |
kubelet |
ingress-canary-6bhf8 |
FailedMount |
MountVolume.SetUp failed for volume "cert" : secret "canary-serving-cert" not found |
| (x3) | openshift-ingress-operator |
kubelet |
ingress-operator-c588d8cb4-nclxg |
BackOff |
Back-off restarting failed container ingress-operator in pod ingress-operator-c588d8cb4-nclxg_openshift-ingress-operator(22a30079-d7fc-49cf-882e-1c5022cb5bf6) |
| (x4) | openshift-ingress-operator |
kubelet |
ingress-operator-c588d8cb4-nclxg |
Created |
Created container: ingress-operator |
openshift-ingress-operator |
cluster-ingress-operator |
ingress-operator |
FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| (x4) | openshift-ingress-operator | kubelet | ingress-operator-c588d8cb4-nclxg | Started | Started container ingress-operator |
| (x3) | openshift-ingress-operator | kubelet | ingress-operator-c588d8cb4-nclxg | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3d21c51712e6e0cdd9c776479f6d1ab55bc1085df5bb5f583e69ee192d11fd3" already present on machine |
| | openshift-network-operator | network-operator | network-operator-lock | LeaderElection | master-0_78b2d64f-e6ae-4bbf-a1cd-47dc9f2e6615 became leader |
| | openshift-multus | daemonset-controller | cni-sysctl-allowlist-ds | SuccessfulCreate | Created pod: cni-sysctl-allowlist-ds-kl9jm |
| | openshift-network-operator | cluster-network-operator | network-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-multus | kubelet | cni-sysctl-allowlist-ds-kl9jm | Created | Created container: kube-multus-additional-cni-plugins |
| | openshift-multus | kubelet | cni-sysctl-allowlist-ds-kl9jm | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe683caef773a1963fc13f96afe58892563ea9921db8ac39369e3a9a05ea7181" already present on machine |
| | openshift-multus | kubelet | cni-sysctl-allowlist-ds-kl9jm | Started | Started container kube-multus-additional-cni-plugins |
| | openshift-multus | kubelet | cni-sysctl-allowlist-ds-kl9jm | Killing | Stopping container kube-multus-additional-cni-plugins |
| | openshift-cluster-olm-operator | cluster-olm-operator | cluster-olm-operator-lock | LeaderElection | cluster-olm-operator-55b69c6c48-mzk89_584d9b0f-9922-4a0a-badd-4ae778f1e0c0 became leader |
| | openshift-service-ca | service-ca-controller | service-ca-controller-lock | LeaderElection | service-ca-676cd8b9b5-bfm5s_a0f49a9d-2cab-40a9-b237-c3ea1346984b became leader |
| | openshift-authentication-operator | cluster-authentication-operator | cluster-authentication-operator-lock | LeaderElection | authentication-operator-755d954778-jrdqm_0328707b-bb61-45ee-90c8-5dd2a42255c3 became leader |
| | openshift-multus | deployment-controller | multus-admission-controller | ScalingReplicaSet | Scaled up replica set multus-admission-controller-6d678b8d67 to 1 |
| | openshift-multus | replicaset-controller | multus-admission-controller-6d678b8d67 | SuccessfulCreate | Created pod: multus-admission-controller-6d678b8d67-rzbff |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded changed from False to True ("IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.180.229:443/healthz\": dial tcp 172.30.180.229:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory") |
| | openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | MutatingWebhookConfigurationUpdated | Updated MutatingWebhookConfiguration.admissionregistration.k8s.io/catalogd-mutating-webhook-configuration because it changed |
| | openshift-multus | kubelet | multus-admission-controller-6d678b8d67-rzbff | Created | Created container: kube-rbac-proxy |
| | openshift-multus | kubelet | multus-admission-controller-6d678b8d67-rzbff | Started | Started container multus-admission-controller |
| | openshift-multus | kubelet | multus-admission-controller-6d678b8d67-rzbff | Created | Created container: multus-admission-controller |
| | openshift-multus | kubelet | multus-admission-controller-6d678b8d67-rzbff | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-lock | LeaderElection | openshift-kube-scheduler-operator-7485d55966-wcpf8_ef6b415f-dd85-4f3f-9b6d-f63bfd0de491 became leader |
| | openshift-multus | kubelet | multus-admission-controller-6d678b8d67-rzbff | Started | Started container kube-rbac-proxy |
| | openshift-multus | kubelet | multus-admission-controller-6d678b8d67-rzbff | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bbe162375a11ed3810a1081c30dd400f461f2421d5f1e27d8792048bbd216956" already present on machine |
| | openshift-multus | multus | multus-admission-controller-6d678b8d67-rzbff | AddedInterface | Add eth0 [10.128.0.74/23] from ovn-kubernetes |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | StartingNewRevision | new revision 5 triggered by "required secret/localhost-recovery-client-token has changed" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator | openshift-kube-scheduler-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.41:8443/apis/oauth.openshift.io/v1: bad status from https://10.128.0.41:8443/apis/oauth.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.128.0.41:8443/apis/user.openshift.io/v1: bad status from https://10.128.0.41:8443/apis/user.openshift.io/v1: 401\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.180.229:443/healthz\": dial tcp 172.30.180.229:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.41:8443/apis/oauth.openshift.io/v1: bad status from https://10.128.0.41:8443/apis/oauth.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.128.0.41:8443/apis/user.openshift.io/v1: bad status from https://10.128.0.41:8443/apis/user.openshift.io/v1: 401\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.180.229:443/healthz\": dial tcp 172.30.180.229:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" |
| | openshift-multus | replicaset-controller | multus-admission-controller-7c64d55f8 | SuccessfulDelete | Deleted pod: multus-admission-controller-7c64d55f8-fzfsp |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-audit-policy-controller-auditpolicycontroller | kube-apiserver-operator | FastControllerResync | Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling |
| | openshift-multus | kubelet | multus-admission-controller-7c64d55f8-fzfsp | Killing | Stopping container multus-admission-controller |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | StartingNewRevision | new revision 2 triggered by "required secret/localhost-recovery-client-token has changed" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator | kube-apiserver-operator-lock | LeaderElection | kube-apiserver-operator-54984b6678-p5mdv_1eaee897-2132-4f89-8607-03e82a746573 became leader |
| | openshift-multus | kubelet | multus-admission-controller-7c64d55f8-fzfsp | Killing | Stopping container kube-rbac-proxy |
| | openshift-multus | deployment-controller | multus-admission-controller | ScalingReplicaSet | Scaled down replica set multus-admission-controller-7c64d55f8 to 0 from 1 |
| | openshift-kube-apiserver-operator | kube-apiserver-operator | kube-apiserver-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-cloud-credential-operator | kubelet | cloud-credential-operator-595c8f9ff-p8hbc | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1faa2081a881db884a86bdfe33fcb6a6af1d14c3e9ee5c44dfe4b09045684e13" |
| | openshift-cloud-credential-operator | kubelet | cloud-credential-operator-595c8f9ff-p8hbc | Created | Created container: kube-rbac-proxy |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 3" to "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 4",Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 3" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 4" |
| | openshift-cloud-credential-operator | kubelet | cloud-credential-operator-595c8f9ff-p8hbc | Started | Started container kube-rbac-proxy |
| | openshift-authentication-operator | oauth-apiserver-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | authentication-operator | FastControllerResync | Controller "openshift-apiserver-APIService" resync interval is set to 10s which might lead to client request throttling |
| | openshift-cloud-credential-operator | multus | cloud-credential-operator-595c8f9ff-p8hbc | AddedInterface | Add eth0 [10.128.0.59/23] from ovn-kubernetes |
| | openshift-cloud-credential-operator | kubelet | cloud-credential-operator-595c8f9ff-p8hbc | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/kube-scheduler-pod-5 -n openshift-kube-scheduler because it was missing |
| | openshift-authentication-operator | oauth-apiserver-audit-policy-controller-auditpolicycontroller | authentication-operator | FastControllerResync | Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/config-5 -n openshift-kube-scheduler because it was missing |
| | openshift-cluster-samples-operator | kubelet | cluster-samples-operator-f8cbff74c-hr9g4 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e446723bbab96c4e4662ff058d5eccba72d0c36d26c7b8b3f07183fa49d3ab9" |
| | openshift-cluster-samples-operator | multus | cluster-samples-operator-f8cbff74c-hr9g4 | AddedInterface | Add eth0 [10.128.0.60/23] from ovn-kubernetes |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | PodCreated | Created Pod/installer-4-master-0 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/serviceaccount-ca-5 -n openshift-kube-scheduler because it was missing |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator | csi-snapshot-controller-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-machine-api | kubelet | cluster-autoscaler-operator-67fd9768b5-6dzpr | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-machine-api | multus | cluster-autoscaler-operator-67fd9768b5-6dzpr | AddedInterface | Add eth0 [10.128.0.62/23] from ovn-kubernetes |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator | csi-snapshot-controller-operator-lock | LeaderElection | csi-snapshot-controller-operator-7b87b97578-9fpgj_b6338549-e7a6-4130-a14e-36a7dfe21009 became leader |
openshift-kube-scheduler |
kubelet |
installer-4-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f56d01ef06fe016c748e8c87538dea8e9fcc84856eb116bd3597cc8e042e9f0a" already present on machine | |
openshift-machine-api |
kubelet |
cluster-autoscaler-operator-67fd9768b5-6dzpr |
Started |
Started container kube-rbac-proxy | |
openshift-cluster-storage-operator |
csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller |
csi-snapshot-controller-operator |
OperatorStatusChanged |
Status for clusteroperator/csi-snapshot-controller changed: Degraded changed from True to False ("All is well") | |
openshift-cluster-samples-operator |
kubelet |
cluster-samples-operator-f8cbff74c-hr9g4 |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e446723bbab96c4e4662ff058d5eccba72d0c36d26c7b8b3f07183fa49d3ab9" in 2.156s (2.156s including waiting). Image size: 450350026 bytes. | |
openshift-cluster-storage-operator |
csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller |
csi-snapshot-controller-operator |
OperatorStatusChanged |
Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "CSISnapshotControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps csi-snapshot-controller)\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshots.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io volumesnapshots.snapshot.storage.k8s.io)\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshotcontents.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io volumesnapshotcontents.snapshot.storage.k8s.io)\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshotclasses.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io volumesnapshotclasses.snapshot.storage.k8s.io)\nCSISnapshotGuestStaticResourceControllerDegraded: " to "CSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshots.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io volumesnapshots.snapshot.storage.k8s.io)\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshotcontents.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io volumesnapshotcontents.snapshot.storage.k8s.io)\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshotclasses.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get 
customresourcedefinitions.apiextensions.k8s.io volumesnapshotclasses.snapshot.storage.k8s.io)\nCSISnapshotGuestStaticResourceControllerDegraded: " | |
openshift-cluster-samples-operator |
kubelet |
cluster-samples-operator-f8cbff74c-hr9g4 |
Created |
Created container: cluster-samples-operator | |
openshift-cluster-storage-operator |
csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller |
csi-snapshot-controller-operator |
OperatorStatusChanged |
Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "CSISnapshotControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps csi-snapshot-controller)\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshots.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io volumesnapshots.snapshot.storage.k8s.io)\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshotcontents.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io volumesnapshotcontents.snapshot.storage.k8s.io)\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshotclasses.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io volumesnapshotclasses.snapshot.storage.k8s.io)\nCSISnapshotGuestStaticResourceControllerDegraded: \nCSISnapshotStaticResourceControllerDegraded: \"serviceaccount.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts csi-snapshot-controller)\nCSISnapshotStaticResourceControllerDegraded: " to "CSISnapshotControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps csi-snapshot-controller)\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshots.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io 
volumesnapshots.snapshot.storage.k8s.io)\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshotcontents.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io volumesnapshotcontents.snapshot.storage.k8s.io)\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshotclasses.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io volumesnapshotclasses.snapshot.storage.k8s.io)\nCSISnapshotGuestStaticResourceControllerDegraded: " | |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Degraded changed from False to True ("CSISnapshotControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps csi-snapshot-controller)\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshots.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io volumesnapshots.snapshot.storage.k8s.io)\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshotcontents.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io volumesnapshotcontents.snapshot.storage.k8s.io)\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshotclasses.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io volumesnapshotclasses.snapshot.storage.k8s.io)\nCSISnapshotGuestStaticResourceControllerDegraded: \nCSISnapshotStaticResourceControllerDegraded: \"serviceaccount.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts csi-snapshot-controller)\nCSISnapshotStaticResourceControllerDegraded: ") |
| | openshift-machine-api | kubelet | cluster-autoscaler-operator-67fd9768b5-6dzpr | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd8adea550cbbaf16cb9409b31ec8b997320d247f9f30c80608ac1fbf9c7a07e" |
| | openshift-machine-api | kubelet | cluster-autoscaler-operator-67fd9768b5-6dzpr | Started | Started container kube-rbac-proxy |
| | openshift-machine-api | kubelet | cluster-autoscaler-operator-67fd9768b5-6dzpr | Created | Created container: kube-rbac-proxy |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/scheduler-kubeconfig-5 -n openshift-kube-scheduler because it was missing |
| | openshift-cluster-samples-operator | kubelet | cluster-samples-operator-f8cbff74c-hr9g4 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e446723bbab96c4e4662ff058d5eccba72d0c36d26c7b8b3f07183fa49d3ab9" already present on machine |
| | openshift-kube-scheduler | multus | installer-4-master-0 | AddedInterface | Add eth0 [10.128.0.75/23] from ovn-kubernetes |
| | openshift-cluster-samples-operator | kubelet | cluster-samples-operator-f8cbff74c-hr9g4 | Started | Started container cluster-samples-operator |
| | openshift-cluster-samples-operator | kubelet | cluster-samples-operator-f8cbff74c-hr9g4 | Created | Created container: cluster-samples-operator-watch |
| | openshift-cluster-samples-operator | kubelet | cluster-samples-operator-f8cbff74c-hr9g4 | Started | Started container cluster-samples-operator-watch |
| | openshift-kube-scheduler | kubelet | installer-4-master-0 | Created | Created container: installer |
| | openshift-kube-scheduler | kubelet | installer-4-master-0 | Started | Started container installer |
| | openshift-cluster-samples-operator | file-change-watchdog | cluster-samples-operator | FileChangeWatchdogStarted | Started watching files for process cluster-samples-operator[2] |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObserveWebhookTokenAuthenticator | authentication-token webhook configuration status changed from false to true |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.180.229:443/healthz\": dial tcp 172.30.180.229:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.180.229:443/healthz\": dial tcp 172.30.180.229:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-5 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObservedConfigChanged | Writing updated observed config: map[string]any{ "admission": map[string]any{"pluginConfig": map[string]any{"PodSecurity": map[string]any{"configuration": map[string]any{"defaults": map[string]any{"audit": string("restricted"), "audit-version": string("latest"), "enforce": string("privileged"), "enforce-version": string("latest"), ...}}}, "network.openshift.io/ExternalIPRanger": map[string]any{"configuration": map[string]any{"allowIngressIP": bool(false), "apiVersion": string("network.openshift.io/v1"), "kind": string("ExternalIPRangerAdmissionConfig")}}, "network.openshift.io/RestrictedEndpointsAdmission": map[string]any{"configuration": map[string]any{"apiVersion": string("network.openshift.io/v1"), "kind": string("RestrictedEndpointsAdmissionConfig"), "restrictedCIDRs": []any{string("10.128.0.0/16"), string("172.30.0.0/16")}}}}}, "apiServerArguments": map[string]any{ "api-audiences": []any{string("https://kubernetes.default.svc")}, + "authentication-token-webhook-config-file": []any{ + string("/etc/kubernetes/static-pod-resources/secrets/webhook-authenticator/kubeConfig"), + }, + "authentication-token-webhook-version": []any{string("v1")}, "etcd-servers": []any{string("https://192.168.32.10:2379"), string("https://localhost:2379")}, "feature-gates": []any{string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), ...}, ... // 6 identical entries }, "corsAllowedOrigins": []any{string(`//127\.0\.0\.1(:|$)`), string("//localhost(:|$)")}, "gracefulTerminationDuration": string("15"), ... // 2 identical entries } |
| | openshift-authentication-operator | oauth-apiserver-webhook-authenticator-controller-webhookauthenticatorcontroller | authentication-operator | SecretCreated | Created Secret/webhook-authentication-integrated-oauth -n openshift-config because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-pod-2 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 1:\nNodeInstallerDegraded: installer: ving-cert\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-000\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-001\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-002\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-003\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-004\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-005\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-006\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-007\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-008\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-009\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\",\nNodeInstallerDegraded: (string) (len=29) \"control-plane-node-kubeconfig\",\nNodeInstallerDegraded: (string) (len=26) \"check-endpoints-kubeconfig\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0217 15:03:37.900679 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0217 15:03:37.913283 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0217 15:03:37.913332 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0217 15:03:37.913342 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0217 15:03:37.923929 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0217 15:04:07.924752 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0217 15:04:21.928422 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: " |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/config-2 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | InstallerPodFailed | installer errors: installer: ving-cert", (string) (len=21) "user-serving-cert-000", (string) (len=21) "user-serving-cert-001", (string) (len=21) "user-serving-cert-002", (string) (len=21) "user-serving-cert-003", (string) (len=21) "user-serving-cert-004", (string) (len=21) "user-serving-cert-005", (string) (len=21) "user-serving-cert-006", (string) (len=21) "user-serving-cert-007", (string) (len=21) "user-serving-cert-008", (string) (len=21) "user-serving-cert-009" }, CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) { (string) (len=20) "aggregator-client-ca", (string) (len=9) "client-ca", (string) (len=29) "control-plane-node-kubeconfig", (string) (len=26) "check-endpoints-kubeconfig" }, OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) { (string) (len=17) "trusted-ca-bundle" }, CertDir: (string) (len=57) "/etc/kubernetes/static-pod-resources/kube-apiserver-certs", ResourceDir: (string) (len=36) "/etc/kubernetes/static-pod-resources", PodManifestDir: (string) (len=25) "/etc/kubernetes/manifests", Timeout: (time.Duration) 2m0s, StaticPodManifestsLockFile: (string) "", PodMutationFns: ([]installerpod.PodMutationFunc) <nil>, KubeletVersion: (string) "" }) I0217 15:03:37.900679 1 cmd.go:413] Getting controller reference for node master-0 I0217 15:03:37.913283 1 cmd.go:426] Waiting for installer revisions to settle for node master-0 I0217 15:03:37.913332 1 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false I0217 15:03:37.913342 1 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false I0217 15:03:37.923929 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0 I0217 15:04:07.924752 1 cmd.go:524] Getting installer pods for node master-0 F0217 15:04:21.928422 1 cmd.go:109] Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": net/http: request canceled (Client.Timeout exceeded while awaiting headers) |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | SecretCreated | Created Secret/serving-cert-5 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | SecretCreated | Created Secret/webhook-authenticator -n openshift-kube-apiserver because it was missing |
| | openshift-machine-api | kubelet | machine-api-operator-bd7dd5c46-g6fgz | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fa28b66298c8b34f2c7b357b012e663e3954cfc7c85aa1e44651a79aeaf8b2a9" |
| | openshift-machine-api | kubelet | machine-api-operator-bd7dd5c46-g6fgz | Started | Started container kube-rbac-proxy |
| | openshift-machine-api | kubelet | machine-api-operator-bd7dd5c46-g6fgz | Created | Created container: kube-rbac-proxy |
| | openshift-machine-api | kubelet | machine-api-operator-bd7dd5c46-g6fgz | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-cloud-credential-operator | kubelet | cloud-credential-operator-595c8f9ff-p8hbc | Started | Started container cloud-credential-operator |
| | openshift-machine-api | kubelet | cluster-autoscaler-operator-67fd9768b5-6dzpr | Started | Started container cluster-autoscaler-operator |
| | openshift-machine-api | kubelet | cluster-autoscaler-operator-67fd9768b5-6dzpr | Created | Created container: cluster-autoscaler-operator |
| | openshift-machine-api | kubelet | cluster-autoscaler-operator-67fd9768b5-6dzpr | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd8adea550cbbaf16cb9409b31ec8b997320d247f9f30c80608ac1fbf9c7a07e" in 4.108s (4.108s including waiting). Image size: 451204770 bytes. |
| | openshift-cloud-credential-operator | kubelet | cloud-credential-operator-595c8f9ff-p8hbc | Created | Created container: cloud-credential-operator |
| | openshift-machine-api | cluster-autoscaler-operator-67fd9768b5-6dzpr_af7fb589-0d93-40f4-aeec-2456d024d188 | cluster-autoscaler-operator-leader | LeaderElection | cluster-autoscaler-operator-67fd9768b5-6dzpr_af7fb589-0d93-40f4-aeec-2456d024d188 became leader |
| | openshift-machine-api | multus | machine-api-operator-bd7dd5c46-g6fgz | AddedInterface | Add eth0 [10.128.0.67/23] from ovn-kubernetes |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | SecretCreated | Created Secret/localhost-recovery-client-token-5 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | RevisionTriggered | new revision 5 triggered by "required secret/localhost-recovery-client-token has changed" |
| | openshift-cloud-credential-operator | kubelet | cloud-credential-operator-595c8f9ff-p8hbc | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1faa2081a881db884a86bdfe33fcb6a6af1d14c3e9ee5c44dfe4b09045684e13" in 6.94s (6.94s including waiting). Image size: 875178413 bytes. |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-2 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/bound-sa-token-signing-certs-2 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 4" to "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 5",Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 4" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 5" |
| | openshift-kube-scheduler | kubelet | installer-4-master-0 | Killing | Stopping container installer |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/etcd-serving-ca-2 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-server-ca-2 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | PodCreated | Created Pod/installer-5-master-0 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kubelet-serving-ca-2 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/sa-token-signing-certs-2 -n openshift-kube-apiserver because it was missing |
| | openshift-machine-api | kubelet | machine-api-operator-bd7dd5c46-g6fgz | Created | Created container: machine-api-operator |
| | openshift-machine-api | kubelet | machine-api-operator-bd7dd5c46-g6fgz | Started | Started container machine-api-operator |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-audit-policies-2 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-scheduler | kubelet | installer-5-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f56d01ef06fe016c748e8c87538dea8e9fcc84856eb116bd3597cc8e042e9f0a" already present on machine |
| | openshift-kube-scheduler | multus | installer-5-master-0 | AddedInterface | Add eth0 [10.128.0.76/23] from ovn-kubernetes |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-target-config-controller-targetconfigcontroller | kube-apiserver-operator | ConfigMapUpdated | Updated ConfigMap/config -n openshift-kube-apiserver: cause by changes in data.config.yaml |
| | openshift-machine-api | machineapioperator | machine-api-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-machine-api | kubelet | machine-api-operator-bd7dd5c46-g6fgz | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fa28b66298c8b34f2c7b357b012e663e3954cfc7c85aa1e44651a79aeaf8b2a9" in 7.341s (7.341s including waiting). Image size: 857023173 bytes. |
| | default | machineapioperator | machine-api | Status upgrade | Progressing towards operator: 4.18.32 |
| | openshift-kube-scheduler | kubelet | installer-5-master-0 | Created | Created container: installer |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-kube-scheduler | kubelet | installer-5-master-0 | Started | Started container installer |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/etcd-client-2 -n openshift-kube-apiserver because it was missing |
| (x3) | openshift-multus | kubelet | cni-sysctl-allowlist-ds-kl9jm | Unhealthy | Readiness probe errored: rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1 |
| | openshift-cluster-machine-approver | kubelet | machine-approver-8569dd85ff-f9g8s | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-monitoring | multus | prometheus-operator-7485d645b8-nzz2j | AddedInterface | Add eth0 [10.128.0.72/23] from ovn-kubernetes |
| | openshift-monitoring | kubelet | prometheus-operator-7485d645b8-nzz2j | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:19c3c8392b72ccf9a518d1d60fab0fd1e58a05b544caa79eb11bb68f00981d9d" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-serving-certkey-2 -n openshift-kube-apiserver because it was missing |
| | openshift-cluster-machine-approver | kubelet | machine-approver-8569dd85ff-f9g8s | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e7ac69aff2f28f6b3cbdb166c7dac7a3490167bcd670cd7057bdde1e1e7684d" |
| | openshift-cluster-machine-approver | kubelet | machine-approver-8569dd85ff-f9g8s | Started | Started container kube-rbac-proxy |
| | openshift-cluster-machine-approver | kubelet | machine-approver-8569dd85ff-f9g8s | Created | Created container: kube-rbac-proxy |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-client-token-2 -n openshift-kube-apiserver because it was missing |
| | openshift-monitoring | kubelet | prometheus-operator-7485d645b8-nzz2j | Created | Created container: prometheus-operator |
| | openshift-monitoring | kubelet | prometheus-operator-7485d645b8-nzz2j | Started | Started container prometheus-operator |
| | openshift-monitoring | kubelet | prometheus-operator-7485d645b8-nzz2j | Started | Started container kube-rbac-proxy |
| | openshift-monitoring | kubelet | prometheus-operator-7485d645b8-nzz2j | Created | Created container: kube-rbac-proxy |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/webhook-authenticator-2 -n openshift-kube-apiserver because it was missing |
| | openshift-monitoring | kubelet | prometheus-operator-7485d645b8-nzz2j | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine |
| | openshift-monitoring | kubelet | prometheus-operator-7485d645b8-nzz2j | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:19c3c8392b72ccf9a518d1d60fab0fd1e58a05b544caa79eb11bb68f00981d9d" in 1.491s (1.491s including waiting). Image size: 456399406 bytes. |
openshift-monitoring |
kubelet |
prometheus-operator-7485d645b8-nzz2j |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:19c3c8392b72ccf9a518d1d60fab0fd1e58a05b544caa79eb11bb68f00981d9d" in 1.491s (1.491s including waiting). Image size: 456399406 bytes. | |
openshift-monitoring |
kubelet |
prometheus-operator-7485d645b8-nzz2j |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine | |
openshift-monitoring |
kubelet |
prometheus-operator-7485d645b8-nzz2j |
Started |
Started container prometheus-operator | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ConfigMapCreated |
Created ConfigMap/kubelet-serving-ca-bundle -n openshift-monitoring because it was missing | |
openshift-cluster-machine-approver |
master-0_6202ae3b-be42-44e0-a13c-14af9da3143b |
cluster-machine-approver-leader |
LeaderElection |
master-0_6202ae3b-be42-44e0-a13c-14af9da3143b became leader | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ServiceAccountCreated |
Created ServiceAccount/node-exporter -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ServiceAccountCreated |
Created ServiceAccount/kube-state-metrics -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ServiceCreated |
Created Service/thanos-querier -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ConfigMapCreated |
Created ConfigMap/kubelet-serving-ca-bundle -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ServiceAccountCreated |
Created ServiceAccount/node-exporter -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ServiceAccountCreated |
Created ServiceAccount/openshift-state-metrics -n openshift-monitoring because it was missing | |
openshift-cluster-machine-approver |
kubelet |
machine-approver-8569dd85ff-f9g8s |
Started |
Started container machine-approver-controller | |
openshift-cluster-machine-approver |
kubelet |
machine-approver-8569dd85ff-f9g8s |
Created |
Created container: machine-approver-controller | |
openshift-cluster-machine-approver |
kubelet |
machine-approver-8569dd85ff-f9g8s |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e7ac69aff2f28f6b3cbdb166c7dac7a3490167bcd670cd7057bdde1e1e7684d" in 1.968s (1.968s including waiting). Image size: 462065055 bytes. | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
StartingNewRevision |
new revision 3 triggered by "required configmap/config has changed" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
RevisionTriggered |
new revision 2 triggered by "required secret/localhost-recovery-client-token has changed" | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ServiceAccountCreated |
Created ServiceAccount/openshift-state-metrics -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ServiceAccountCreated |
Created ServiceAccount/kube-state-metrics -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ServiceCreated |
Created Service/thanos-querier -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/openshift-state-metrics because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/system:aggregated-metrics-reader because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/monitoring-edit because it was missing | |
openshift-monitoring |
kubelet |
node-exporter-rttp2 |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a64a70eb2fef4095ba241021e37c52034c067c57121d6c588f8c7fd3dc24b55f" | |
openshift-monitoring |
deployment-controller |
kube-state-metrics |
ScalingReplicaSet |
Scaled up replica set kube-state-metrics-7cc9598d54 to 1 | |
openshift-monitoring |
replicaset-controller |
kube-state-metrics-7cc9598d54 |
SuccessfulCreate |
Created pod: kube-state-metrics-7cc9598d54-z7lzs | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/telemeter-client-view because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/monitoring-rules-edit because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/monitoring-alertmanager-api-writer -n openshift-user-workload-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/monitoring-alertmanager-api-reader -n openshift-user-workload-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/user-workload-monitoring-config-edit -n openshift-user-workload-monitoring because it was missing | |
openshift-monitoring |
daemonset-controller |
node-exporter |
SuccessfulCreate |
Created pod: node-exporter-rttp2 | |
openshift-monitoring |
kubelet |
openshift-state-metrics-546cc7d765-b4xl8 |
FailedMount |
MountVolume.SetUp failed for volume "openshift-state-metrics-tls" : secret "openshift-state-metrics-tls" not found | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system:metrics-server because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ConfigMapCreated |
Created ConfigMap/kube-state-metrics-custom-resource-state-configmap -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/monitoring-alertmanager-api-writer -n openshift-user-workload-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/monitoring-alertmanager-api-reader -n openshift-user-workload-monitoring because it was missing | |
openshift-monitoring |
kubelet |
node-exporter-rttp2 |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a64a70eb2fef4095ba241021e37c52034c067c57121d6c588f8c7fd3dc24b55f" | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ServiceCreated |
Created Service/metrics-server -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/user-workload-monitoring-config-edit -n openshift-user-workload-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ServiceCreated |
Created Service/metrics-server -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/metrics-server-auth-reader -n kube-system because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/alert-routing-edit because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/monitoring-edit because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/monitoring-rules-view because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/telemeter-client-view because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/monitoring-rules-edit because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system:metrics-server because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ConfigMapCreated |
Created ConfigMap/kube-state-metrics-custom-resource-state-configmap -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ServiceCreated |
Created Service/node-exporter -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ServiceCreated |
Created Service/node-exporter -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ServiceCreated |
Created Service/kube-state-metrics -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ServiceCreated |
Created Service/openshift-state-metrics -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/pod-metrics-reader because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/alert-routing-edit because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/prometheus-k8s because it was missing | |
openshift-monitoring |
replicaset-controller |
openshift-state-metrics-546cc7d765 |
SuccessfulCreate |
Created pod: openshift-state-metrics-546cc7d765-b4xl8 | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/cluster-monitoring-view because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ServiceCreated |
Created Service/kube-state-metrics -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ServiceCreated |
Created Service/openshift-state-metrics -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/pod-metrics-reader because it was missing | |
openshift-monitoring |
deployment-controller |
openshift-state-metrics |
ScalingReplicaSet |
Scaled up replica set openshift-state-metrics-546cc7d765 to 1 | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/system:metrics-server because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/kube-state-metrics because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/node-exporter because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/node-exporter because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ServiceAccountCreated |
Created ServiceAccount/metrics-server -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/monitoring-rules-view because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/kube-state-metrics because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/openshift-state-metrics because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/prometheus-k8s because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/system:aggregated-metrics-reader because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/node-exporter because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/cluster-monitoring-view because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/prometheus-k8s because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/system:metrics-server because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/openshift-state-metrics because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/kube-state-metrics because it was missing | |
openshift-monitoring |
deployment-controller |
openshift-state-metrics |
ScalingReplicaSet |
Scaled up replica set openshift-state-metrics-546cc7d765 to 1 | |
openshift-monitoring |
replicaset-controller |
openshift-state-metrics-546cc7d765 |
SuccessfulCreate |
Created pod: openshift-state-metrics-546cc7d765-b4xl8 | |
openshift-monitoring |
replicaset-controller |
kube-state-metrics-7cc9598d54 |
SuccessfulCreate |
Created pod: kube-state-metrics-7cc9598d54-z7lzs | |
openshift-monitoring |
deployment-controller |
kube-state-metrics |
ScalingReplicaSet |
Scaled up replica set kube-state-metrics-7cc9598d54 to 1 | |
openshift-monitoring |
daemonset-controller |
node-exporter |
SuccessfulCreate |
Created pod: node-exporter-rttp2 | |
openshift-monitoring |
kubelet |
openshift-state-metrics-546cc7d765-b4xl8 |
FailedMount |
MountVolume.SetUp failed for volume "openshift-state-metrics-tls" : secret "openshift-state-metrics-tls" not found | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/node-exporter because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ServiceAccountCreated |
Created ServiceAccount/metrics-server -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/kube-state-metrics because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/metrics-server-auth-reader -n kube-system because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/openshift-state-metrics because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/prometheus-k8s because it was missing | |
openshift-monitoring |
multus |
openshift-state-metrics-546cc7d765-b4xl8 |
AddedInterface |
Add eth0 [10.128.0.77/23] from ovn-kubernetes | |
openshift-monitoring |
kubelet |
kube-state-metrics-7cc9598d54-z7lzs |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e96b53e74d1b802c2e63544e4689c9d262e9c996902c6e8a7f3ca34b23fdd50" | |
openshift-monitoring |
multus |
kube-state-metrics-7cc9598d54-z7lzs |
AddedInterface |
Add eth0 [10.128.0.78/23] from ovn-kubernetes | |
openshift-monitoring |
multus |
openshift-state-metrics-546cc7d765-b4xl8 |
AddedInterface |
Add eth0 [10.128.0.77/23] from ovn-kubernetes | |
openshift-monitoring |
multus |
kube-state-metrics-7cc9598d54-z7lzs |
AddedInterface |
Add eth0 [10.128.0.78/23] from ovn-kubernetes | |
openshift-monitoring |
kubelet |
kube-state-metrics-7cc9598d54-z7lzs |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e96b53e74d1b802c2e63544e4689c9d262e9c996902c6e8a7f3ca34b23fdd50" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-installer-controller |
kube-apiserver-operator |
PodCreated |
Created Pod/installer-1-retry-1-master-0 -n openshift-kube-apiserver because it was missing | |
openshift-monitoring |
kubelet |
openshift-state-metrics-546cc7d765-b4xl8 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine | |
openshift-monitoring |
kubelet |
openshift-state-metrics-546cc7d765-b4xl8 |
Created |
Created container: kube-rbac-proxy-main | |
openshift-monitoring |
kubelet |
openshift-state-metrics-546cc7d765-b4xl8 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine | |
openshift-monitoring |
kubelet |
openshift-state-metrics-546cc7d765-b4xl8 |
Created |
Created container: kube-rbac-proxy-main | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-pod-3 -n openshift-kube-apiserver because it was missing | |
openshift-monitoring |
kubelet |
openshift-state-metrics-546cc7d765-b4xl8 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine | |
openshift-monitoring |
kubelet |
openshift-state-metrics-546cc7d765-b4xl8 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine | |
openshift-monitoring |
kubelet |
openshift-state-metrics-546cc7d765-b4xl8 |
Started |
Started container kube-rbac-proxy-main | |
openshift-monitoring |
kubelet |
openshift-state-metrics-546cc7d765-b4xl8 |
Started |
Started container kube-rbac-proxy-main | |
openshift-monitoring |
kubelet |
openshift-state-metrics-546cc7d765-b4xl8 |
Started |
Started container kube-rbac-proxy-self | |
openshift-monitoring |
kubelet |
openshift-state-metrics-546cc7d765-b4xl8 |
Created |
Created container: kube-rbac-proxy-self | |
openshift-monitoring |
kubelet |
openshift-state-metrics-546cc7d765-b4xl8 |
Started |
Started container kube-rbac-proxy-self | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/config-3 -n openshift-kube-apiserver because it was missing | |
openshift-monitoring |
kubelet |
openshift-state-metrics-546cc7d765-b4xl8 |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3f08586dd67c2d3d21053a044138f1bbedceb0847f1af8c3aa76127d86135a58" | |
openshift-monitoring |
kubelet |
openshift-state-metrics-546cc7d765-b4xl8 |
Created |
Created container: kube-rbac-proxy-self | |
openshift-monitoring |
kubelet |
openshift-state-metrics-546cc7d765-b4xl8 |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3f08586dd67c2d3d21053a044138f1bbedceb0847f1af8c3aa76127d86135a58" | |
openshift-kube-apiserver |
kubelet |
installer-1-retry-1-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49" already present on machine | |
openshift-kube-apiserver |
multus |
installer-1-retry-1-master-0 |
AddedInterface |
Add eth0 [10.128.0.79/23] from ovn-kubernetes | |
openshift-monitoring |
deployment-controller |
telemeter-client |
ScalingReplicaSet |
Scaled up replica set telemeter-client-7fbdcd9689 to 1 | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-3 -n openshift-kube-apiserver because it was missing | |
openshift-monitoring |
deployment-controller |
telemeter-client |
ScalingReplicaSet |
Scaled up replica set telemeter-client-7fbdcd9689 to 1 | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ConfigMapCreated |
Created ConfigMap/metrics-server-audit-profiles -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/cluster-monitoring-metrics-api -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ConfigMapCreated |
Created ConfigMap/metrics-server-audit-profiles -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/monitoring-alertmanager-edit -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/monitoring-alertmanager-view -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ConfigMapCreated |
Created ConfigMap/telemeter-trusted-ca-bundle-8i12ta5c71j38 -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/cluster-monitoring-metrics-api -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ConfigMapCreated |
Created ConfigMap/telemeter-trusted-ca-bundle-8i12ta5c71j38 -n openshift-monitoring because it was missing | |
openshift-monitoring |
kubelet |
node-exporter-rttp2 |
Started |
Started container init-textfile | |
openshift-monitoring |
kubelet |
node-exporter-rttp2 |
Created |
Created container: init-textfile | |
openshift-monitoring |
kubelet |
node-exporter-rttp2 |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a64a70eb2fef4095ba241021e37c52034c067c57121d6c588f8c7fd3dc24b55f" in 4.799s (4.799s including waiting). Image size: 412516925 bytes. | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/monitoring-alertmanager-view -n openshift-monitoring because it was missing | |
openshift-kube-apiserver |
kubelet |
installer-1-retry-1-master-0 |
Started |
Started container installer | |
openshift-kube-apiserver |
kubelet |
installer-1-retry-1-master-0 |
Created |
Created container: installer | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/monitoring-alertmanager-edit -n openshift-monitoring because it was missing | |
openshift-kube-storage-version-migrator-operator |
openshift-kube-storage-version-migrator-operator |
openshift-kube-storage-version-migrator-operator-lock |
LeaderElection |
kube-storage-version-migrator-operator-cd5474998-tckph_5ce2c354-e035-4698-a3af-c67eadf7f2f3 became leader | |
openshift-monitoring |
replicaset-controller |
telemeter-client-7fbdcd9689 |
SuccessfulCreate |
Created pod: telemeter-client-7fbdcd9689-spqtt | |
openshift-monitoring |
kubelet |
node-exporter-rttp2 |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a64a70eb2fef4095ba241021e37c52034c067c57121d6c588f8c7fd3dc24b55f" in 4.799s (4.799s including waiting). Image size: 412516925 bytes. | |
openshift-monitoring |
kubelet |
node-exporter-rttp2 |
Created |
Created container: init-textfile | |
openshift-monitoring |
kubelet |
node-exporter-rttp2 |
Started |
Started container init-textfile | |
openshift-monitoring |
replicaset-controller |
telemeter-client-7fbdcd9689 |
SuccessfulCreate |
Created pod: telemeter-client-7fbdcd9689-spqtt | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
SecretCreated |
Created Secret/grpc-tls -n openshift-monitoring because it was missing | |
openshift-monitoring |
kubelet |
kube-state-metrics-7cc9598d54-z7lzs |
Started |
Started container kube-rbac-proxy-main | |
openshift-monitoring |
kubelet |
openshift-state-metrics-546cc7d765-b4xl8 |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3f08586dd67c2d3d21053a044138f1bbedceb0847f1af8c3aa76127d86135a58" in 1.975s (1.975s including waiting). Image size: 426804569 bytes. | |
openshift-monitoring |
kubelet |
telemeter-client-7fbdcd9689-spqtt |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9899b0f08b1202d149e16f09616ee7b8f37e3cda642386d93a6d3f63d72a316b" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/bound-sa-token-signing-certs-3 -n openshift-kube-apiserver because it was missing | |
openshift-monitoring |
kubelet |
node-exporter-rttp2 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a64a70eb2fef4095ba241021e37c52034c067c57121d6c588f8c7fd3dc24b55f" already present on machine | |
openshift-monitoring |
kubelet |
telemeter-client-7fbdcd9689-spqtt |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9899b0f08b1202d149e16f09616ee7b8f37e3cda642386d93a6d3f63d72a316b" | |
openshift-monitoring |
multus |
telemeter-client-7fbdcd9689-spqtt |
AddedInterface |
Add eth0 [10.128.0.80/23] from ovn-kubernetes | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
SecretCreated |
Created Secret/grpc-tls -n openshift-monitoring because it was missing | |
openshift-monitoring |
kubelet |
node-exporter-rttp2 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a64a70eb2fef4095ba241021e37c52034c067c57121d6c588f8c7fd3dc24b55f" already present on machine | |
openshift-monitoring |
kubelet |
kube-state-metrics-7cc9598d54-z7lzs |
Started |
Started container kube-rbac-proxy-self | |
openshift-monitoring |
kubelet |
kube-state-metrics-7cc9598d54-z7lzs |
Created |
Created container: kube-rbac-proxy-self | |
openshift-monitoring |
kubelet |
kube-state-metrics-7cc9598d54-z7lzs |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine | |
openshift-monitoring |
kubelet |
kube-state-metrics-7cc9598d54-z7lzs |
Started |
Started container kube-rbac-proxy-main | |
openshift-monitoring |
kubelet |
openshift-state-metrics-546cc7d765-b4xl8 |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3f08586dd67c2d3d21053a044138f1bbedceb0847f1af8c3aa76127d86135a58" in 1.975s (1.975s including waiting). Image size: 426804569 bytes. | |
openshift-monitoring |
kubelet |
kube-state-metrics-7cc9598d54-z7lzs |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e96b53e74d1b802c2e63544e4689c9d262e9c996902c6e8a7f3ca34b23fdd50" in 4.419s (4.419s including waiting). Image size: 435381677 bytes. | |
openshift-monitoring |
kubelet |
kube-state-metrics-7cc9598d54-z7lzs |
Created |
Created container: kube-state-metrics | |
openshift-monitoring |
kubelet |
kube-state-metrics-7cc9598d54-z7lzs |
Created |
Created container: kube-rbac-proxy-main | |
openshift-monitoring |
kubelet |
kube-state-metrics-7cc9598d54-z7lzs |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine | |
openshift-monitoring |
kubelet |
kube-state-metrics-7cc9598d54-z7lzs |
Started |
Started container kube-state-metrics | |
openshift-monitoring |
kubelet |
kube-state-metrics-7cc9598d54-z7lzs |
Created |
Created container: kube-state-metrics | |
openshift-monitoring |
kubelet |
kube-state-metrics-7cc9598d54-z7lzs |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e96b53e74d1b802c2e63544e4689c9d262e9c996902c6e8a7f3ca34b23fdd50" in 4.419s (4.419s including waiting). Image size: 435381677 bytes. | |
openshift-monitoring |
kubelet |
kube-state-metrics-7cc9598d54-z7lzs |
Started |
Started container kube-state-metrics | |
openshift-monitoring |
kubelet |
kube-state-metrics-7cc9598d54-z7lzs |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine | |
openshift-monitoring |
kubelet |
kube-state-metrics-7cc9598d54-z7lzs |
Created |
Created container: kube-rbac-proxy-main | |
openshift-monitoring |
multus |
telemeter-client-7fbdcd9689-spqtt |
AddedInterface |
Add eth0 [10.128.0.80/23] from ovn-kubernetes | |
openshift-monitoring |
kubelet |
kube-state-metrics-7cc9598d54-z7lzs |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine | |
openshift-monitoring |
kubelet |
kube-state-metrics-7cc9598d54-z7lzs |
Created |
Created container: kube-rbac-proxy-self | |
openshift-monitoring |
kubelet |
kube-state-metrics-7cc9598d54-z7lzs |
Started |
Started container kube-rbac-proxy-self | |
openshift-monitoring |
replicaset-controller |
metrics-server-f94977f65 |
SuccessfulCreate |
Created pod: metrics-server-f94977f65-sgf5z | |
openshift-monitoring |
deployment-controller |
metrics-server |
ScalingReplicaSet |
Scaled up replica set metrics-server-f94977f65 to 1 | |
openshift-monitoring |
kubelet |
openshift-state-metrics-546cc7d765-b4xl8 |
Started |
Started container openshift-state-metrics | |
openshift-monitoring |
kubelet |
openshift-state-metrics-546cc7d765-b4xl8 |
Created |
Created container: openshift-state-metrics | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
SecretCreated |
Created Secret/metrics-server-aaauri1gstf68 -n openshift-monitoring because it was missing | |
openshift-monitoring |
kubelet |
node-exporter-rttp2 |
Created |
Created container: node-exporter | |
openshift-monitoring |
kubelet |
node-exporter-rttp2 |
Started |
Started container node-exporter | |
openshift-monitoring |
kubelet |
node-exporter-rttp2 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine | |
openshift-monitoring |
kubelet |
node-exporter-rttp2 |
Created |
Created container: kube-rbac-proxy | |
openshift-monitoring |
kubelet |
node-exporter-rttp2 |
Started |
Started container kube-rbac-proxy | |
openshift-monitoring |
kubelet |
metrics-server-f94977f65-sgf5z |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c1a0aba9ead3a33353dc8a033699dfa4795f4050516677dad6ed4ac664094692" | |
openshift-monitoring |
kubelet |
telemeter-client-7fbdcd9689-spqtt |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52ba78768a24efe94f7f95fe5bdd3c6156919979d5882682e06ae4a8a8d3fb4a" | |
openshift-monitoring |
multus |
metrics-server-f94977f65-sgf5z |
AddedInterface |
Add eth0 [10.128.0.81/23] from ovn-kubernetes | |
openshift-monitoring |
kubelet |
telemeter-client-7fbdcd9689-spqtt |
Started |
Started container telemeter-client | |
openshift-monitoring |
kubelet |
telemeter-client-7fbdcd9689-spqtt |
Created |
Created container: telemeter-client | |
openshift-monitoring |
kubelet |
telemeter-client-7fbdcd9689-spqtt |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9899b0f08b1202d149e16f09616ee7b8f37e3cda642386d93a6d3f63d72a316b" in 2.183s (2.183s including waiting). Image size: 475358904 bytes. | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/etcd-serving-ca-3 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 1:\nNodeInstallerDegraded: installer: ving-cert\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-000\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-001\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-002\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-003\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-004\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-005\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-006\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-007\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-008\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-009\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\",\nNodeInstallerDegraded: (string) (len=29) \"control-plane-node-kubeconfig\",\nNodeInstallerDegraded: (string) (len=26) \"check-endpoints-kubeconfig\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: 
(string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0217 15:03:37.900679 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0217 15:03:37.913283 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0217 15:03:37.913332 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0217 15:03:37.913342 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0217 15:03:37.923929 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0217 15:04:07.924752 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0217 15:04:21.928422 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: " to "NodeControllerDegraded: All master nodes are ready",Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 1" to "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 2",Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 1" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 2" | |
openshift-kube-apiserver |
kubelet |
installer-1-retry-1-master-0 |
Killing |
Stopping container installer | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-server-ca-3 -n openshift-kube-apiserver because it was missing | |
openshift-monitoring |
kubelet |
metrics-server-f94977f65-sgf5z |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c1a0aba9ead3a33353dc8a033699dfa4795f4050516677dad6ed4ac664094692" in 2.183s (2.183s including waiting). Image size: 466257032 bytes. | |
openshift-monitoring |
kubelet |
telemeter-client-7fbdcd9689-spqtt |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52ba78768a24efe94f7f95fe5bdd3c6156919979d5882682e06ae4a8a8d3fb4a" in 1.964s (1.964s including waiting). Image size: 432739783 bytes. | |
openshift-monitoring |
kubelet |
telemeter-client-7fbdcd9689-spqtt |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine | |
openshift-monitoring |
kubelet |
telemeter-client-7fbdcd9689-spqtt |
Created |
Created container: reload | |
openshift-monitoring |
kubelet |
telemeter-client-7fbdcd9689-spqtt |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine | |
openshift-monitoring |
kubelet |
telemeter-client-7fbdcd9689-spqtt |
Created |
Created container: kube-rbac-proxy | |
openshift-monitoring |
kubelet |
telemeter-client-7fbdcd9689-spqtt |
Started |
Started container kube-rbac-proxy | |
openshift-monitoring |
kubelet |
telemeter-client-7fbdcd9689-spqtt |
Started |
Started container reload | |
openshift-monitoring |
kubelet |
metrics-server-f94977f65-sgf5z |
Created |
Created container: metrics-server | |
openshift-monitoring |
kubelet |
metrics-server-f94977f65-sgf5z |
Started |
Started container metrics-server | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-installer-controller |
kube-apiserver-operator |
PodCreated |
Created Pod/installer-2-master-0 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kubelet-serving-ca-3 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver |
kubelet |
installer-2-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49" already present on machine | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ConfigMapUpdated |
Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/sa-token-signing-certs-3 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver |
multus |
installer-2-master-0 |
AddedInterface |
Add eth0 [10.128.0.82/23] from ovn-kubernetes | |
openshift-kube-apiserver |
kubelet |
installer-2-master-0 |
Started |
Started container installer | |
openshift-kube-apiserver |
kubelet |
installer-2-master-0 |
Created |
Created container: installer | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-audit-policies-3 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
SecretCreated |
Created Secret/etcd-client-3 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
SecretCreated |
Created Secret/localhost-recovery-serving-certkey-3 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
SecretCreated |
Created Secret/localhost-recovery-client-token-3 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
SecretCreated |
Created Secret/webhook-authenticator-3 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
RevisionTriggered |
new revision 3 triggered by "required configmap/config has changed" | |
openshift-apiserver-operator |
openshift-apiserver-operator |
openshift-apiserver-operator-lock |
LeaderElection |
openshift-apiserver-operator-6d4655d9cf-5f5g9_100c6d51-d108-493c-bd47-1cac277df48f became leader | |
openshift-apiserver-operator |
openshift-apiserver-operator |
openshift-apiserver-operator |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} | |
openshift-service-ca-operator |
service-ca-operator |
service-ca-operator-lock |
LeaderElection |
service-ca-operator-5dc4688546-sg75p_77b5d690-4dac-4e49-8f27-091d88ab4034 became leader | |
openshift-apiserver-operator |
openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice |
openshift-apiserver-operator |
FastControllerResync |
Controller "openshift-apiserver-APIService" resync interval is set to 10s which might lead to client request throttling | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 2" to "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 3",Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 2" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 3" | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager |
openshift-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-controller-manager changed: Progressing changed from True to False ("All is well"),Available changed from False to True ("All is well"),status.versions changed from [] to [{"operator" "4.18.32"}] | |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} | |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator-lock |
LeaderElection |
openshift-controller-manager-operator-5f5f84757d-dsfkk_9e91aaca-08ba-4746-ae8d-384bbc6c4a78 became leader | |
openshift-apiserver-operator |
openshift-apiserver-operator-audit-policy-controller-auditpolicycontroller |
openshift-apiserver-operator |
FastControllerResync |
Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager |
openshift-controller-manager-operator |
OperatorVersionChanged |
clusteroperator/openshift-controller-manager version "operator" changed from "" to "4.18.32" | |
openshift-kube-apiserver |
kubelet |
installer-2-master-0 |
Killing |
Stopping container installer | |
| (x2) | openshift-operator-lifecycle-manager |
operator-lifecycle-manager |
packageserver |
NeedsReinstall |
apiServices not installed |
openshift-kube-apiserver-operator |
kube-apiserver-operator-installer-controller |
kube-apiserver-operator |
PodCreated |
Created Pod/installer-3-master-0 -n openshift-kube-apiserver because it was missing | |
| (x2) | openshift-operator-lifecycle-manager |
operator-lifecycle-manager |
packageserver |
InstallCheckFailed |
install timeout |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator |
kube-controller-manager-operator |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} | |
| (x4) | openshift-operator-lifecycle-manager |
operator-lifecycle-manager |
packageserver |
InstallSucceeded |
waiting for install components to report healthy |
openshift-kube-apiserver |
multus |
installer-3-master-0 |
AddedInterface |
Add eth0 [10.128.0.83/23] from ovn-kubernetes | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
StartingNewRevision |
new revision 3 triggered by "required secret/localhost-recovery-client-token has changed,required configmap/kube-controller-manager-pod has changed" | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator |
kube-controller-manager-operator-lock |
LeaderElection |
kube-controller-manager-operator-78ff47c7c5-xvzq9_d2dd78ba-cfb8-4682-bae6-bae8956487cb became leader | |
| (x4) | openshift-operator-lifecycle-manager |
operator-lifecycle-manager |
packageserver |
AllRequirementsMet |
all requirements found, attempting install |
openshift-kube-apiserver |
kubelet |
installer-3-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49" already present on machine | |
| (x4) | openshift-operator-lifecycle-manager |
operator-lifecycle-manager |
packageserver |
InstallWaiting |
apiServices not installed |
openshift-kube-apiserver |
kubelet |
installer-3-master-0 |
Started |
Started container installer | |
openshift-kube-apiserver |
kubelet |
installer-3-master-0 |
Created |
Created container: installer | |
openshift-config-operator |
config-operator |
config-operator-lock |
LeaderElection |
openshift-config-operator-7c6bdb986f-fcnqs_9ca5b85c-5951-4adc-b93e-9108482a1e4c became leader | |
openshift-config-operator |
config-operator-configoperatorcontroller |
openshift-config-operator |
FastControllerResync |
Controller "ConfigOperatorController" resync interval is set to 10s which might lead to client request throttling | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: status.versions changed from [{"raw-internal" "4.18.32"}] to [{"raw-internal" "4.18.32"} {"kube-controller-manager" "1.31.14"} {"operator" "4.18.32"}] | |
| (x2) | openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorVersionChanged |
clusteroperator/kube-controller-manager version "kube-controller-manager" changed from "" to "1.31.14" |
| (x2) | openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorVersionChanged |
clusteroperator/kube-controller-manager version "operator" changed from "" to "4.18.32" |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/kube-controller-cert-syncer-kubeconfig-3 -n openshift-kube-controller-manager because it was missing | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ConfigMapUpdated |
Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/serviceaccount-ca-3 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 2"),Available changed from False to True ("StaticPodsAvailable: 1 nodes are active; 1 node is at revision 2") | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-installer-controller |
kube-controller-manager-operator |
NodeCurrentRevisionChanged |
Updated node "master-0" from revision 0 to 2 because static pod is ready | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/service-ca-3 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/recycler-config-3 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
SecretCreated |
Created Secret/service-account-private-key-3 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
SecretCreated |
Created Secret/serving-cert-3 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
SecretCreated |
Created Secret/localhost-recovery-client-token-3 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
RevisionTriggered |
new revision 3 triggered by "required secret/localhost-recovery-client-token has changed,required configmap/kube-controller-manager-pod has changed" | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorVersionChanged |
clusteroperator/kube-scheduler version "operator" changed from "" to "4.18.32" | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorVersionChanged |
clusteroperator/kube-scheduler version "kube-scheduler" changed from "" to "1.31.14" | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: status.versions changed from [{"raw-internal" "4.18.32"}] to [{"raw-internal" "4.18.32"} {"operator" "4.18.32"} {"kube-scheduler" "1.31.14"}] | |
openshift-kube-scheduler |
static-pod-installer |
installer-5-master-0 |
StaticPodInstallerCompleted |
Successfully installed revision 5 | |
kube-system |
kubelet |
bootstrap-kube-scheduler-master-0 |
Killing |
Stopping container kube-scheduler | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d" already present on machine | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master-0 |
Started |
Started container kube-scheduler | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d" already present on machine | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master-0 |
Started |
Started container wait-for-host-port | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master-0 |
Created |
Created container: wait-for-host-port | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master-0 |
Created |
Created container: kube-scheduler | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f56d01ef06fe016c748e8c87538dea8e9fcc84856eb116bd3597cc8e042e9f0a" already present on machine | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master-0 |
Created |
Created container: kube-scheduler-cert-syncer | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f56d01ef06fe016c748e8c87538dea8e9fcc84856eb116bd3597cc8e042e9f0a" already present on machine | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-installer-controller |
kube-controller-manager-operator |
NodeTargetRevisionChanged |
Updating node "master-0" from revision 2 to 3 because node master-0 with revision 2 is the oldest | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master-0 |
Started |
Started container kube-scheduler-cert-syncer | |
openshift-kube-scheduler |
cert-recovery-controller |
openshift-kube-scheduler |
ControlPlaneTopology |
unable to get control plane topology, using HA cluster values for leader election: Get "https://localhost:6443/apis/config.openshift.io/v1/infrastructures/cluster": tls: failed to verify certificate: x509: certificate is valid for kubernetes, kubernetes.default, kubernetes.default.svc, kubernetes.default.svc.cluster.local, openshift, openshift.default, openshift.default.svc, openshift.default.svc.cluster.local, not localhost-recovery | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-targetconfigcontroller |
kube-controller-manager-operator |
ConfigMapUpdated |
Updated ConfigMap/serviceaccount-ca -n openshift-kube-controller-manager: cause by changes in data.ca-bundle.crt | |
openshift-kube-scheduler |
default-scheduler |
kube-scheduler |
LeaderElection |
master-0_9fcf51dd-b93a-439b-8a17-62047091e959 became leader | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master-0 |
Started |
Started container kube-scheduler-recovery-controller | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master-0 |
Created |
Created container: kube-scheduler-recovery-controller | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 2; 0 nodes have achieved new revision 3"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 2" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 2; 0 nodes have achieved new revision 3" | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
StartingNewRevision |
new revision 4 triggered by "required configmap/serviceaccount-ca has changed" | |
openshift-etcd-operator |
openshift-cluster-etcd-operator |
etcd-operator |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-installer-controller |
kube-controller-manager-operator |
PodCreated |
Created Pod/installer-3-master-0 -n openshift-kube-controller-manager because it was missing | |
openshift-etcd-operator |
openshift-cluster-etcd-operator |
openshift-cluster-etcd-operator-lock |
LeaderElection |
etcd-operator-67bf55ccdd-pjm6n_c7419e44-164a-498f-a4e5-34538e217253 became leader | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
APIServiceCreated |
Created APIService.apiregistration.k8s.io/v1beta1.metrics.k8s.io because it was missing | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-status-controller-statussyncer_etcd |
etcd-operator |
OperatorStatusChanged |
Status for clusteroperator/etcd changed: Degraded message changed from "ClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries\nClusterMemberRemovalControllerDegraded: IsBootstrapComplete failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced" to "ClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries\nClusterMemberRemovalControllerDegraded: IsBootstrapComplete failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values" | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-status-controller-statussyncer_etcd |
etcd-operator |
OperatorStatusChanged |
Status for clusteroperator/etcd changed: Degraded changed from False to True ("ClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries\nClusterMemberRemovalControllerDegraded: IsBootstrapComplete failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced") | |
openshift-kube-controller-manager |
multus |
installer-3-master-0 |
AddedInterface |
Add eth0 [10.128.0.84/23] from ovn-kubernetes | |
| (x3) | openshift-etcd-operator |
openshift-cluster-etcd-operator-member-observer-controller-etcdmemberscontroller |
etcd-operator |
ReportEtcdMembersErrorUpdatingStatus |
etcds.operator.openshift.io "cluster" not found |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/kube-controller-manager-pod-4 -n openshift-kube-controller-manager because it was missing | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-status-controller-statussyncer_etcd |
etcd-operator |
OperatorStatusChanged |
Status for clusteroperator/etcd changed: Degraded message changed from "ClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries\nClusterMemberRemovalControllerDegraded: IsBootstrapComplete failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values" to "ClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values" | |
openshift-kube-controller-manager |
kubelet |
installer-3-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e8ae0b5bab647ff989f276cead5f360bcb88c813f181d75dc3106eb5dbde0b39" already present on machine | |
openshift-kube-controller-manager |
kubelet |
installer-3-master-0 |
Started |
Started container installer | |
openshift-kube-controller-manager |
kubelet |
installer-3-master-0 |
Created |
Created container: installer | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-status-controller-statussyncer_etcd |
etcd-operator |
OperatorStatusChanged |
Status for clusteroperator/etcd changed: Degraded message changed from "ClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values" to "EtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values" | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-status-controller-statussyncer_etcd |
etcd-operator |
OperatorStatusChanged |
Status for clusteroperator/etcd changed: Degraded message changed from "EtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values" to "EtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values" | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/config-4 -n openshift-kube-controller-manager because it was missing | |
openshift-operator-lifecycle-manager |
cronjob-controller |
collect-profiles |
SuccessfulCreate |
Created job collect-profiles-29522355 | |
openshift-operator-lifecycle-manager |
kubelet |
collect-profiles-29522355-rfrsq |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" already present on machine | |
openshift-operator-lifecycle-manager |
job-controller |
collect-profiles-29522355 |
SuccessfulCreate |
Created pod: collect-profiles-29522355-rfrsq | |
| (x2) | openshift-insights |
kubelet |
insights-operator-cb4f7b4cf-cmbjq |
Created |
Created container: insights-operator |
| (x2) | openshift-insights |
kubelet |
insights-operator-cb4f7b4cf-cmbjq |
Started |
Started container insights-operator |
openshift-insights |
kubelet |
insights-operator-cb4f7b4cf-cmbjq |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6ab8803bac3ebada13e90d9dd6208301b981488277cdeb847c25ff8002f5a30" already present on machine | |
openshift-insights |
openshift-insights-operator |
insights-operator |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} | |
openshift-operator-lifecycle-manager |
multus |
collect-profiles-29522355-rfrsq |
AddedInterface |
Add eth0 [10.128.0.85/23] from ovn-kubernetes | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/cluster-policy-controller-config-4 -n openshift-kube-controller-manager because it was missing | |
openshift-operator-lifecycle-manager |
kubelet |
collect-profiles-29522355-rfrsq |
Created |
Created container: collect-profiles | |
openshift-operator-lifecycle-manager |
kubelet |
collect-profiles-29522355-rfrsq |
Started |
Started container collect-profiles | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller |
etcd-operator |
ConfigMapUpdated |
Updated ConfigMap/etcd-endpoints -n openshift-etcd: cause by changes in data.91eb892c5ee87610,data.MTkyLjE2OC4zMi4xMA | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-revisioncontroller |
etcd-operator |
StartingNewRevision |
new revision 2 triggered by "required configmap/etcd-endpoints has changed" | |
openshift-operator-lifecycle-manager |
cronjob-controller |
collect-profiles |
SawCompletedJob |
Saw completed job: collect-profiles-29522355, condition: Complete | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/controller-manager-kubeconfig-4 -n openshift-kube-controller-manager because it was missing | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ConfigMapUpdated |
Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml | |
openshift-operator-lifecycle-manager |
job-controller |
collect-profiles-29522355 |
Completed |
Job completed | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/kube-controller-cert-syncer-kubeconfig-4 -n openshift-kube-controller-manager because it was missing | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-installer-controller |
etcd-operator |
NodeCurrentRevisionChanged |
Updated node "master-0" from revision 0 to 1 because static pod is ready | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/serviceaccount-ca-4 -n openshift-kube-controller-manager because it was missing | |
openshift-ingress-canary |
kubelet |
ingress-canary-6bhf8 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3d21c51712e6e0cdd9c776479f6d1ab55bc1085df5bb5f583e69ee192d11fd3" already present on machine | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/service-ca-4 -n openshift-kube-controller-manager because it was missing | |
| (x25) | openshift-cloud-controller-manager-operator |
kubelet |
cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c |
BackOff |
Back-off restarting failed container kube-rbac-proxy in pod cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c_openshift-cloud-controller-manager-operator(14723cb7-2d96-42b7-b559-70386c4c841c) |
openshift-etcd-operator |
openshift-cluster-etcd-operator-revisioncontroller |
etcd-operator |
ConfigMapCreated |
Created ConfigMap/etcd-pod-2 -n openshift-etcd because it was missing | |
openshift-ingress-canary |
multus |
ingress-canary-6bhf8 |
AddedInterface |
Add eth0 [10.128.0.73/23] from ovn-kubernetes | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-revisioncontroller |
etcd-operator |
ConfigMapCreated |
Created ConfigMap/etcd-endpoints-2 -n openshift-etcd because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/recycler-config-4 -n openshift-kube-controller-manager because it was missing | |
openshift-ingress-canary |
kubelet |
ingress-canary-6bhf8 |
Created |
Created container: serve-healthcheck-canary | |
openshift-ingress-canary |
kubelet |
ingress-canary-6bhf8 |
Started |
Started container serve-healthcheck-canary | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
SecretCreated |
Created Secret/service-account-private-key-4 -n openshift-kube-controller-manager because it was missing | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-revisioncontroller |
etcd-operator |
ConfigMapCreated |
Created ConfigMap/etcd-all-bundles-2 -n openshift-etcd because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
SecretCreated |
Created Secret/serving-cert-4 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
SecretCreated |
Created Secret/localhost-recovery-client-token-4 -n openshift-kube-controller-manager because it was missing | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-revisioncontroller |
etcd-operator |
SecretCreated |
Created Secret/etcd-all-certs-2 -n openshift-etcd because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
RevisionTriggered |
new revision 4 triggered by "required configmap/serviceaccount-ca has changed" | |
openshift-kube-controller-manager |
kubelet |
installer-3-master-0 |
Killing |
Stopping container installer | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 2; 0 nodes have achieved new revision 3" to "NodeInstallerProgressing: 1 node is at revision 2; 0 nodes have achieved new revision 4",Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 2; 0 nodes have achieved new revision 3" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 2; 0 nodes have achieved new revision 4" | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-installer-controller |
etcd-operator |
NodeTargetRevisionChanged |
Updating node "master-0" from revision 1 to 2 because node master-0 with revision 1 is the oldest | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-installer-controller |
kube-controller-manager-operator |
PodCreated |
Created Pod/installer-4-master-0 -n openshift-kube-controller-manager because it was missing | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-installer-controller |
etcd-operator |
PodCreated |
Created Pod/installer-2-master-0 -n openshift-etcd because it was missing | |
openshift-kube-controller-manager |
multus |
installer-4-master-0 |
AddedInterface |
Add eth0 [10.128.0.86/23] from ovn-kubernetes | |
openshift-kube-controller-manager |
kubelet |
installer-4-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e8ae0b5bab647ff989f276cead5f360bcb88c813f181d75dc3106eb5dbde0b39" already present on machine | |
openshift-kube-controller-manager |
kubelet |
installer-4-master-0 |
Created |
Created container: installer | |
openshift-kube-controller-manager |
kubelet |
installer-4-master-0 |
Started |
Started container installer | |
openshift-etcd |
multus |
installer-2-master-0 |
AddedInterface |
Add eth0 [10.128.0.87/23] from ovn-kubernetes | |
openshift-etcd |
kubelet |
installer-2-master-0 |
Started |
Started container installer | |
openshift-etcd |
kubelet |
installer-2-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6dd9324942b3d09b4b9a768f36b47be4e555d947910ee3d115fc5448c95f7399" already present on machine | |
openshift-etcd |
kubelet |
installer-2-master-0 |
Created |
Created container: installer | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ConfigMapUpdated |
Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml | |
default |
apiserver |
openshift-kube-apiserver |
ShutdownInitiated |
Received signal to terminate, becoming unready, but keeping serving | |
openshift-kube-apiserver |
kubelet |
bootstrap-kube-apiserver-master-0 |
Killing |
Stopping container kube-apiserver-insecure-readyz | |
default |
apiserver |
openshift-kube-apiserver |
AfterShutdownDelayDuration |
The minimal shutdown duration of 0s finished | |
default |
apiserver |
openshift-kube-apiserver |
TerminationPreShutdownHooksFinished |
All pre-shutdown hooks have been finished | |
default |
apiserver |
openshift-kube-apiserver |
InFlightRequestsDrained |
All non long-running request(s) in-flight have drained | |
default |
apiserver |
openshift-kube-apiserver |
HTTPServerStoppedListening |
HTTP Server has stopped listening | |
openshift-kube-apiserver |
kubelet |
bootstrap-kube-apiserver-master-0 |
Killing |
Stopping container kube-apiserver | |
default |
apiserver |
openshift-kube-apiserver |
TerminationGracefulTerminationFinished |
All pending requests processed | |
openshift-kube-apiserver |
apiserver |
kube-apiserver-master-0 |
KubeAPIReadyz |
readyz=true | |
| (x11) | openshift-machine-config-operator |
machineconfigoperator |
machine-config |
OperatorDegraded: MachineConfigPoolsFailed |
Failed to resync 4.18.32 because: Get "https://172.30.0.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/master": dial tcp 172.30.0.1:443: connect: connection refused |
default |
kubelet |
master-0 |
Starting |
Starting kubelet. | |
default |
kubelet |
master-0 |
NodeAllocatableEnforced |
Updated Node Allocatable limit across pods | |
| (x3) | default |
kubelet |
master-0 |
NodeHasSufficientPID |
Node master-0 status is now: NodeHasSufficientPID |
openshift-ingress-operator |
kubelet |
ingress-operator-c588d8cb4-nclxg |
Created |
Created container: ingress-operator | |
| (x3) | default |
kubelet |
master-0 |
NodeHasNoDiskPressure |
Node master-0 status is now: NodeHasNoDiskPressure |
| (x3) | default |
kubelet |
master-0 |
NodeHasSufficientMemory |
Node master-0 status is now: NodeHasSufficientMemory |
openshift-ingress-operator |
kubelet |
ingress-operator-c588d8cb4-nclxg |
Started |
Started container ingress-operator | |
openshift-ingress-operator |
kubelet |
ingress-operator-c588d8cb4-nclxg |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3d21c51712e6e0cdd9c776479f6d1ab55bc1085df5bb5f583e69ee192d11fd3" already present on machine | |
openshift-monitoring |
kubelet |
node-exporter-rttp2 |
FailedMount |
MountVolume.SetUp failed for volume "metrics-client-ca" : failed to sync configmap cache: timed out waiting for the condition | |
openshift-monitoring |
kubelet |
kube-state-metrics-7cc9598d54-z7lzs |
FailedMount |
MountVolume.SetUp failed for volume "kube-state-metrics-kube-rbac-proxy-config" : failed to sync secret cache: timed out waiting for the condition | |
openshift-cluster-machine-approver |
kubelet |
machine-approver-8569dd85ff-f9g8s |
FailedMount |
MountVolume.SetUp failed for volume "machine-approver-tls" : failed to sync secret cache: timed out waiting for the condition | |
| | openshift-machine-api | kubelet | cluster-autoscaler-operator-67fd9768b5-6dzpr | FailedMount | MountVolume.SetUp failed for volume "auth-proxy-config" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-machine-api | kubelet | cluster-autoscaler-operator-67fd9768b5-6dzpr | FailedMount | MountVolume.SetUp failed for volume "cert" : failed to sync secret cache: timed out waiting for the condition |
| | openshift-ingress | kubelet | router-default-864ddd5f56-g8w2f | FailedMount | MountVolume.SetUp failed for volume "metrics-certs" : failed to sync secret cache: timed out waiting for the condition |
| | openshift-cloud-credential-operator | kubelet | cloud-credential-operator-595c8f9ff-p8hbc | FailedMount | MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" : failed to sync secret cache: timed out waiting for the condition |
| | openshift-cloud-credential-operator | kubelet | cloud-credential-operator-595c8f9ff-p8hbc | FailedMount | MountVolume.SetUp failed for volume "cco-trusted-ca" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-ingress-canary | kubelet | ingress-canary-6bhf8 | FailedMount | MountVolume.SetUp failed for volume "cert" : failed to sync secret cache: timed out waiting for the condition |
| | openshift-insights | kubelet | insights-operator-cb4f7b4cf-cmbjq | FailedMount | MountVolume.SetUp failed for volume "service-ca-bundle" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-insights | kubelet | insights-operator-cb4f7b4cf-cmbjq | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : failed to sync secret cache: timed out waiting for the condition |
| | openshift-insights | kubelet | insights-operator-cb4f7b4cf-cmbjq | FailedMount | MountVolume.SetUp failed for volume "trusted-ca-bundle" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-machine-api | kubelet | cluster-baremetal-operator-7bc947fc7d-8qkdw | FailedMount | MountVolume.SetUp failed for volume "cert" : failed to sync secret cache: timed out waiting for the condition |
| | openshift-machine-api | kubelet | cluster-baremetal-operator-7bc947fc7d-8qkdw | FailedMount | MountVolume.SetUp failed for volume "config" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-machine-api | kubelet | cluster-baremetal-operator-7bc947fc7d-8qkdw | FailedMount | MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" : failed to sync secret cache: timed out waiting for the condition |
| | openshift-machine-api | kubelet | cluster-baremetal-operator-7bc947fc7d-8qkdw | FailedMount | MountVolume.SetUp failed for volume "images" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-cluster-samples-operator | kubelet | cluster-samples-operator-f8cbff74c-hr9g4 | FailedMount | MountVolume.SetUp failed for volume "samples-operator-tls" : failed to sync secret cache: timed out waiting for the condition |
| | openshift-cluster-storage-operator | kubelet | cluster-storage-operator-75b869db96-qbmw5 | FailedMount | MountVolume.SetUp failed for volume "cluster-storage-operator-serving-cert" : failed to sync secret cache: timed out waiting for the condition |
| | openshift-machine-api | kubelet | control-plane-machine-set-operator-d8bf84b88-hmpc7 | FailedMount | MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" : failed to sync secret cache: timed out waiting for the condition |
| | openshift-machine-api | kubelet | control-plane-machine-set-operator-d8bf84b88-hmpc7 | FailedMount | MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" : failed to sync secret cache: timed out waiting for the condition |
| | openshift-machine-api | kubelet | machine-api-operator-bd7dd5c46-g6fgz | FailedMount | MountVolume.SetUp failed for volume "config" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-machine-api | kubelet | machine-api-operator-bd7dd5c46-g6fgz | FailedMount | MountVolume.SetUp failed for volume "machine-api-operator-tls" : failed to sync secret cache: timed out waiting for the condition |
| | openshift-machine-api | kubelet | machine-api-operator-bd7dd5c46-g6fgz | FailedMount | MountVolume.SetUp failed for volume "config" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-machine-api | kubelet | machine-api-operator-bd7dd5c46-g6fgz | FailedMount | MountVolume.SetUp failed for volume "machine-api-operator-tls" : failed to sync secret cache: timed out waiting for the condition |
| | openshift-machine-api | kubelet | machine-api-operator-bd7dd5c46-g6fgz | FailedMount | MountVolume.SetUp failed for volume "images" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-machine-config-operator | kubelet | machine-config-controller-686c884b4d-5q97f | FailedMount | MountVolume.SetUp failed for volume "proxy-tls" : failed to sync secret cache: timed out waiting for the condition |
| | openshift-machine-config-operator | kubelet | machine-config-controller-686c884b4d-5q97f | FailedMount | MountVolume.SetUp failed for volume "mcc-auth-proxy-config" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-r6sfp | FailedMount | MountVolume.SetUp failed for volume "proxy-tls" : failed to sync secret cache: timed out waiting for the condition |
| | openshift-operator-lifecycle-manager | kubelet | packageserver-67d4dbd88b-szr25 | FailedMount | MountVolume.SetUp failed for volume "webhook-cert" : failed to sync secret cache: timed out waiting for the condition |
| | openshift-operator-lifecycle-manager | kubelet | packageserver-67d4dbd88b-szr25 | FailedMount | MountVolume.SetUp failed for volume "apiservice-cert" : failed to sync secret cache: timed out waiting for the condition |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-r6sfp | FailedMount | MountVolume.SetUp failed for volume "mcd-auth-proxy-config" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-machine-config-operator | kubelet | machine-config-operator-84976bb859-kmc95 | FailedMount | MountVolume.SetUp failed for volume "proxy-tls" : failed to sync secret cache: timed out waiting for the condition |
| | openshift-machine-config-operator | kubelet | machine-config-operator-84976bb859-kmc95 | FailedMount | MountVolume.SetUp failed for volume "images" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-machine-config-operator | kubelet | machine-config-operator-84976bb859-kmc95 | FailedMount | MountVolume.SetUp failed for volume "auth-proxy-config" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-monitoring | kubelet | telemeter-client-7fbdcd9689-spqtt | FailedMount | MountVolume.SetUp failed for volume "metrics-client-ca" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-machine-api | kubelet | machine-api-operator-bd7dd5c46-g6fgz | FailedMount | MountVolume.SetUp failed for volume "images" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-monitoring | kubelet | prometheus-operator-7485d645b8-nzz2j | FailedMount | MountVolume.SetUp failed for volume "metrics-client-ca" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-monitoring | kubelet | prometheus-operator-7485d645b8-nzz2j | FailedMount | MountVolume.SetUp failed for volume "prometheus-operator-kube-rbac-proxy-config" : failed to sync secret cache: timed out waiting for the condition |
| | openshift-monitoring | kubelet | prometheus-operator-7485d645b8-nzz2j | FailedMount | MountVolume.SetUp failed for volume "prometheus-operator-tls" : failed to sync secret cache: timed out waiting for the condition |
| | openshift-monitoring | kubelet | openshift-state-metrics-546cc7d765-b4xl8 | FailedMount | MountVolume.SetUp failed for volume "openshift-state-metrics-kube-rbac-proxy-config" : failed to sync secret cache: timed out waiting for the condition |
| | openshift-monitoring | kubelet | openshift-state-metrics-546cc7d765-b4xl8 | FailedMount | MountVolume.SetUp failed for volume "metrics-client-ca" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-kube-scheduler | cert-recovery-controller | cert-recovery-controller-lock | LeaderElection | master-0_6693f137-88f0-46db-b6a2-389995d83f43 became leader |
| | openshift-machine-api | kubelet | cluster-baremetal-operator-7bc947fc7d-8qkdw | FailedMount | MountVolume.SetUp failed for volume "images" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-machine-api | kubelet | cluster-baremetal-operator-7bc947fc7d-8qkdw | FailedMount | MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" : failed to sync secret cache: timed out waiting for the condition |
| | openshift-machine-api | kubelet | cluster-baremetal-operator-7bc947fc7d-8qkdw | FailedMount | MountVolume.SetUp failed for volume "config" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-cluster-version | kubelet | cluster-version-operator-649c4f5445-7kdb7 | FailedMount | MountVolume.SetUp failed for volume "service-ca" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-cluster-version | kubelet | cluster-version-operator-649c4f5445-7kdb7 | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : failed to sync secret cache: timed out waiting for the condition |
| | openshift-machine-api | kubelet | cluster-baremetal-operator-7bc947fc7d-8qkdw | FailedMount | MountVolume.SetUp failed for volume "cert" : failed to sync secret cache: timed out waiting for the condition |
| | openshift-cluster-machine-approver | kubelet | machine-approver-8569dd85ff-f9g8s | FailedMount | MountVolume.SetUp failed for volume "auth-proxy-config" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-monitoring | kubelet | prometheus-operator-7485d645b8-nzz2j | FailedMount | MountVolume.SetUp failed for volume "metrics-client-ca" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-monitoring | kubelet | openshift-state-metrics-546cc7d765-b4xl8 | FailedMount | MountVolume.SetUp failed for volume "openshift-state-metrics-kube-rbac-proxy-config" : failed to sync secret cache: timed out waiting for the condition |
| | openshift-monitoring | kubelet | openshift-state-metrics-546cc7d765-b4xl8 | FailedMount | MountVolume.SetUp failed for volume "metrics-client-ca" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-monitoring | kubelet | prometheus-operator-7485d645b8-nzz2j | FailedMount | MountVolume.SetUp failed for volume "prometheus-operator-kube-rbac-proxy-config" : failed to sync secret cache: timed out waiting for the condition |
| | openshift-monitoring | kubelet | prometheus-operator-7485d645b8-nzz2j | FailedMount | MountVolume.SetUp failed for volume "prometheus-operator-tls" : failed to sync secret cache: timed out waiting for the condition |
| | openshift-monitoring | kubelet | telemeter-client-7fbdcd9689-spqtt | FailedMount | MountVolume.SetUp failed for volume "metrics-client-ca" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-machine-api | kubelet | cluster-autoscaler-operator-67fd9768b5-6dzpr | FailedMount | MountVolume.SetUp failed for volume "cert" : failed to sync secret cache: timed out waiting for the condition |
| | openshift-monitoring | kubelet | node-exporter-rttp2 | FailedMount | MountVolume.SetUp failed for volume "metrics-client-ca" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-route-controller-manager | kubelet | route-controller-manager-6978b88779-vp5tv | FailedMount | MountVolume.SetUp failed for volume "client-ca" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-controller-manager | kubelet | controller-manager-b9c8fdfbc-rh9v2 | FailedMount | MountVolume.SetUp failed for volume "proxy-ca-bundles" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-controller-manager | kubelet | controller-manager-b9c8fdfbc-rh9v2 | FailedMount | MountVolume.SetUp failed for volume "client-ca" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-cluster-machine-approver | kubelet | machine-approver-8569dd85ff-f9g8s | FailedMount | MountVolume.SetUp failed for volume "config" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-machine-config-operator | kubelet | machine-config-server-l576h | FailedMount | MountVolume.SetUp failed for volume "node-bootstrap-token" : failed to sync secret cache: timed out waiting for the condition |
| | openshift-monitoring | kubelet | prometheus-operator-admission-webhook-695b766898-nm8rs | FailedMount | MountVolume.SetUp failed for volume "tls-certificates" : failed to sync secret cache: timed out waiting for the condition |
| | openshift-machine-api | kubelet | cluster-autoscaler-operator-67fd9768b5-6dzpr | FailedMount | MountVolume.SetUp failed for volume "auth-proxy-config" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-monitoring | kubelet | kube-state-metrics-7cc9598d54-z7lzs | FailedMount | MountVolume.SetUp failed for volume "metrics-client-ca" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-monitoring | kubelet | kube-state-metrics-7cc9598d54-z7lzs | FailedMount | MountVolume.SetUp failed for volume "kube-state-metrics-kube-rbac-proxy-config" : failed to sync secret cache: timed out waiting for the condition |
| | openshift-monitoring | kubelet | kube-state-metrics-7cc9598d54-z7lzs | FailedMount | MountVolume.SetUp failed for volume "kube-state-metrics-tls" : failed to sync secret cache: timed out waiting for the condition |
| | openshift-monitoring | kubelet | kube-state-metrics-7cc9598d54-z7lzs | FailedMount | MountVolume.SetUp failed for volume "metrics-client-ca" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-monitoring | kubelet | kube-state-metrics-7cc9598d54-z7lzs | FailedMount | MountVolume.SetUp failed for volume "kube-state-metrics-tls" : failed to sync secret cache: timed out waiting for the condition |
| | openshift-machine-config-operator | kubelet | machine-config-server-l576h | FailedMount | MountVolume.SetUp failed for volume "certs" : failed to sync secret cache: timed out waiting for the condition |
| | openshift-monitoring | kubelet | prometheus-operator-admission-webhook-695b766898-nm8rs | FailedMount | MountVolume.SetUp failed for volume "tls-certificates" : failed to sync secret cache: timed out waiting for the condition |
| | openshift-ingress-operator | cluster-ingress-operator | ingress-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-kube-apiserver | cert-regeneration-controller | cert-regeneration-controller-lock | LeaderElection | master-0_7f2da21c-ae5c-434e-bb8d-f9c1a3334ac2 became leader |
| (x2) | openshift-monitoring | kubelet | telemeter-client-7fbdcd9689-spqtt | FailedMount | MountVolume.SetUp failed for volume "secret-telemeter-client" : failed to sync secret cache: timed out waiting for the condition |
| (x2) | openshift-monitoring | kubelet | openshift-state-metrics-546cc7d765-b4xl8 | FailedMount | MountVolume.SetUp failed for volume "openshift-state-metrics-tls" : failed to sync secret cache: timed out waiting for the condition |
| (x2) | openshift-monitoring | kubelet | telemeter-client-7fbdcd9689-spqtt | FailedMount | MountVolume.SetUp failed for volume "telemeter-client-tls" : failed to sync secret cache: timed out waiting for the condition |
| (x2) | openshift-monitoring | kubelet | openshift-state-metrics-546cc7d765-b4xl8 | FailedMount | MountVolume.SetUp failed for volume "openshift-state-metrics-tls" : failed to sync secret cache: timed out waiting for the condition |
| (x2) | openshift-monitoring | kubelet | metrics-server-f94977f65-sgf5z | FailedMount | MountVolume.SetUp failed for volume "secret-metrics-client-certs" : failed to sync secret cache: timed out waiting for the condition |
| (x2) | openshift-monitoring | kubelet | metrics-server-f94977f65-sgf5z | FailedMount | MountVolume.SetUp failed for volume "metrics-server-audit-profiles" : failed to sync configmap cache: timed out waiting for the condition |
| (x2) | openshift-monitoring | kubelet | kube-state-metrics-7cc9598d54-z7lzs | FailedMount | MountVolume.SetUp failed for volume "kube-state-metrics-custom-resource-state-configmap" : failed to sync configmap cache: timed out waiting for the condition |
| (x2) | openshift-monitoring | kubelet | metrics-server-f94977f65-sgf5z | FailedMount | MountVolume.SetUp failed for volume "secret-metrics-server-tls" : failed to sync secret cache: timed out waiting for the condition |
| (x2) | openshift-monitoring | kubelet | metrics-server-f94977f65-sgf5z | FailedMount | MountVolume.SetUp failed for volume "client-ca-bundle" : failed to sync secret cache: timed out waiting for the condition |
| (x2) | openshift-multus | kubelet | multus-admission-controller-6d678b8d67-rzbff | FailedMount | MountVolume.SetUp failed for volume "webhook-certs" : failed to sync secret cache: timed out waiting for the condition |
| (x2) | openshift-multus | kubelet | multus-admission-controller-6d678b8d67-rzbff | FailedMount | MountVolume.SetUp failed for volume "webhook-certs" : failed to sync secret cache: timed out waiting for the condition |
| (x2) | openshift-monitoring | kubelet | metrics-server-f94977f65-sgf5z | FailedMount | MountVolume.SetUp failed for volume "client-ca-bundle" : failed to sync secret cache: timed out waiting for the condition |
| (x2) | openshift-monitoring | kubelet | metrics-server-f94977f65-sgf5z | FailedMount | MountVolume.SetUp failed for volume "metrics-server-audit-profiles" : failed to sync configmap cache: timed out waiting for the condition |
| (x2) | openshift-monitoring | kubelet | metrics-server-f94977f65-sgf5z | FailedMount | MountVolume.SetUp failed for volume "secret-metrics-client-certs" : failed to sync secret cache: timed out waiting for the condition |
| (x2) | openshift-monitoring | kubelet | metrics-server-f94977f65-sgf5z | FailedMount | MountVolume.SetUp failed for volume "secret-metrics-server-tls" : failed to sync secret cache: timed out waiting for the condition |
| (x2) | openshift-monitoring | kubelet | metrics-server-f94977f65-sgf5z | FailedMount | MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" : failed to sync configmap cache: timed out waiting for the condition |
| (x2) | openshift-monitoring | kubelet | telemeter-client-7fbdcd9689-spqtt | FailedMount | MountVolume.SetUp failed for volume "federate-client-tls" : failed to sync secret cache: timed out waiting for the condition |
| (x2) | openshift-monitoring | kubelet | metrics-server-f94977f65-sgf5z | FailedMount | MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" : failed to sync configmap cache: timed out waiting for the condition |
| (x2) | openshift-monitoring | kubelet | kube-state-metrics-7cc9598d54-z7lzs | FailedMount | MountVolume.SetUp failed for volume "kube-state-metrics-custom-resource-state-configmap" : failed to sync configmap cache: timed out waiting for the condition |
| (x2) | openshift-monitoring | kubelet | telemeter-client-7fbdcd9689-spqtt | FailedMount | MountVolume.SetUp failed for volume "serving-certs-ca-bundle" : failed to sync configmap cache: timed out waiting for the condition |
| (x2) | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c | FailedMount | MountVolume.SetUp failed for volume "auth-proxy-config" : failed to sync configmap cache: timed out waiting for the condition |
| (x2) | openshift-monitoring | kubelet | telemeter-client-7fbdcd9689-spqtt | FailedMount | MountVolume.SetUp failed for volume "secret-telemeter-client-kube-rbac-proxy-config" : failed to sync secret cache: timed out waiting for the condition |
| (x2) | openshift-monitoring | kubelet | telemeter-client-7fbdcd9689-spqtt | FailedMount | MountVolume.SetUp failed for volume "secret-telemeter-client" : failed to sync secret cache: timed out waiting for the condition |
| (x2) | openshift-monitoring | kubelet | telemeter-client-7fbdcd9689-spqtt | FailedMount | MountVolume.SetUp failed for volume "federate-client-tls" : failed to sync secret cache: timed out waiting for the condition |
| (x2) | openshift-monitoring | kubelet | node-exporter-rttp2 | FailedMount | MountVolume.SetUp failed for volume "node-exporter-kube-rbac-proxy-config" : failed to sync secret cache: timed out waiting for the condition |
| (x2) | openshift-monitoring | kubelet | node-exporter-rttp2 | FailedMount | MountVolume.SetUp failed for volume "node-exporter-tls" : failed to sync secret cache: timed out waiting for the condition |
| (x2) | openshift-monitoring | kubelet | telemeter-client-7fbdcd9689-spqtt | FailedMount | MountVolume.SetUp failed for volume "telemeter-client-tls" : failed to sync secret cache: timed out waiting for the condition |
| (x2) | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c | FailedMount | MountVolume.SetUp failed for volume "cloud-controller-manager-operator-tls" : failed to sync secret cache: timed out waiting for the condition |
| (x2) | openshift-monitoring | kubelet | telemeter-client-7fbdcd9689-spqtt | FailedMount | MountVolume.SetUp failed for volume "serving-certs-ca-bundle" : failed to sync configmap cache: timed out waiting for the condition |
| (x2) | openshift-monitoring | kubelet | node-exporter-rttp2 | FailedMount | MountVolume.SetUp failed for volume "node-exporter-tls" : failed to sync secret cache: timed out waiting for the condition |
| (x2) | openshift-monitoring | kubelet | node-exporter-rttp2 | FailedMount | MountVolume.SetUp failed for volume "node-exporter-kube-rbac-proxy-config" : failed to sync secret cache: timed out waiting for the condition |
| (x2) | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c | FailedMount | MountVolume.SetUp failed for volume "images" : failed to sync configmap cache: timed out waiting for the condition |
| (x2) | openshift-monitoring | kubelet | telemeter-client-7fbdcd9689-spqtt | FailedMount | MountVolume.SetUp failed for volume "telemeter-trusted-ca-bundle" : failed to sync configmap cache: timed out waiting for the condition |
| (x2) | openshift-monitoring | kubelet | telemeter-client-7fbdcd9689-spqtt | FailedMount | MountVolume.SetUp failed for volume "telemeter-trusted-ca-bundle" : failed to sync configmap cache: timed out waiting for the condition |
| (x2) | openshift-monitoring | kubelet | telemeter-client-7fbdcd9689-spqtt | FailedMount | MountVolume.SetUp failed for volume "secret-telemeter-client-kube-rbac-proxy-config" : failed to sync secret cache: timed out waiting for the condition |
openshift-cloud-controller-manager-operator |
kubelet |
cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine | |
| (x17) | openshift-kube-controller-manager-operator |
kube-controller-manager-operator-satokensignercontroller |
kube-controller-manager-operator |
SATokenSignerControllerStuck |
unexpected addresses: 192.168.32.10 |
openshift-cloud-controller-manager-operator |
kubelet |
cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c |
Started |
Started container kube-rbac-proxy | |
openshift-cloud-controller-manager-operator |
kubelet |
cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c |
Created |
Created container: kube-rbac-proxy | |
openshift-ingress |
kubelet |
router-default-864ddd5f56-g8w2f |
Unhealthy |
Startup probe failed: HTTP probe failed with statuscode: 500 | |
openshift-ingress |
kubelet |
router-default-864ddd5f56-g8w2f |
ProbeError |
Startup probe error: HTTP probe failed with statuscode: 500 body: [-]backend-http failed: reason withheld [-]has-synced failed: reason withheld [+]process-running ok healthz check failed | |
| (x8) | openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorVersionChanged |
clusteroperator/kube-apiserver version "operator" changed from "" to "4.18.32" |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: status.versions changed from [{"raw-internal" "4.18.32"}] to [{"raw-internal" "4.18.32"} {"kube-apiserver" "1.31.14"} {"operator" "4.18.32"}] | |
| (x9) | openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorVersionChanged |
clusteroperator/kube-apiserver version "kube-apiserver" changed from "" to "1.31.14" |
openshift-kube-apiserver |
kubelet |
kube-apiserver-startup-monitor-master-0 |
Killing |
Stopping container startup-monitor | |
openshift-etcd |
kubelet |
etcd-master-0 |
Killing |
Stopping container etcdctl | |
| (x3) | openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master-0 |
ProbeError |
Liveness probe error: Get "https://192.168.32.10:10257/healthz": dial tcp 192.168.32.10:10257: connect: connection refused body: |
| (x3) | openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master-0 |
Unhealthy |
Liveness probe failed: Get "https://192.168.32.10:10257/healthz": dial tcp 192.168.32.10:10257: connect: connection refused |
openshift-etcd |
kubelet |
etcd-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8bed6766df40c0c172611f3e4555cd20db639eb505b2345abed6d5babdcbb5e3" already present on machine | |
openshift-etcd |
kubelet |
etcd-master-0 |
Created |
Created container: setup | |
openshift-etcd |
kubelet |
etcd-master-0 |
Started |
Started container setup | |
| (x9) | openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master-0 |
Unhealthy |
Readiness probe failed: Get "https://192.168.32.10:10257/healthz": dial tcp 192.168.32.10:10257: connect: connection refused |
| (x10) | openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master-0 |
ProbeError |
Readiness probe error: Get "https://192.168.32.10:10257/healthz": dial tcp 192.168.32.10:10257: connect: connection refused body: |
| (x12) | openshift-oauth-apiserver |
kubelet |
apiserver-865765995-c58rq |
Unhealthy |
Startup probe failed: HTTP probe failed with statuscode: 500 |
| (x13) | openshift-oauth-apiserver |
kubelet |
apiserver-865765995-c58rq |
ProbeError |
Startup probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [-]etcd failed: reason withheld [+]poststarthook/start-apiserver-admission-initializer ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/max-in-flight-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/openshift.io-StartOAuthInformer ok [+]poststarthook/openshift.io-StartUserInformer ok [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok livez check failed |
| (x3) | openshift-marketplace |
kubelet |
marketplace-operator-6cc5b65c6b-wqxmh |
ProbeError |
Liveness probe error: Get "http://10.128.0.14:8080/healthz": dial tcp 10.128.0.14:8080: connect: connection refused body: |
| (x2) | openshift-catalogd |
kubelet |
catalogd-controller-manager-67bc7c997f-jdfsm |
Unhealthy |
Liveness probe failed: Get "http://10.128.0.36:8081/healthz": dial tcp 10.128.0.36:8081: connect: connection refused |
| (x2) | openshift-catalogd |
kubelet |
catalogd-controller-manager-67bc7c997f-jdfsm |
Unhealthy |
Liveness probe failed: Get "http://10.128.0.36:8081/healthz": dial tcp 10.128.0.36:8081: connect: connection refused |
| (x2) | openshift-catalogd |
kubelet |
catalogd-controller-manager-67bc7c997f-jdfsm |
ProbeError |
Liveness probe error: Get "http://10.128.0.36:8081/healthz": dial tcp 10.128.0.36:8081: connect: connection refused body: |
| (x3) | openshift-marketplace |
kubelet |
marketplace-operator-6cc5b65c6b-wqxmh |
Unhealthy |
Liveness probe failed: Get "http://10.128.0.14:8080/healthz": dial tcp 10.128.0.14:8080: connect: connection refused |
| (x2) | openshift-catalogd |
kubelet |
catalogd-controller-manager-67bc7c997f-jdfsm |
ProbeError |
Liveness probe error: Get "http://10.128.0.36:8081/healthz": dial tcp 10.128.0.36:8081: connect: connection refused body: |
| (x5) | openshift-catalogd |
kubelet |
catalogd-controller-manager-67bc7c997f-jdfsm |
ProbeError |
Readiness probe error: Get "http://10.128.0.36:8081/readyz": dial tcp 10.128.0.36:8081: connect: connection refused body: |
| (x2) | openshift-operator-controller |
kubelet |
operator-controller-controller-manager-85c9b89969-4n2ls |
Unhealthy |
Readiness probe failed: Get "http://10.128.0.39:8081/readyz": dial tcp 10.128.0.39:8081: connect: connection refused |
| (x5) | openshift-catalogd |
kubelet |
catalogd-controller-manager-67bc7c997f-jdfsm |
Unhealthy |
Readiness probe failed: Get "http://10.128.0.36:8081/readyz": dial tcp 10.128.0.36:8081: connect: connection refused |
| (x5) | openshift-catalogd |
kubelet |
catalogd-controller-manager-67bc7c997f-jdfsm |
ProbeError |
Readiness probe error: Get "http://10.128.0.36:8081/readyz": dial tcp 10.128.0.36:8081: connect: connection refused body: |
| (x2) | openshift-operator-controller | kubelet | operator-controller-controller-manager-85c9b89969-4n2ls | ProbeError | Readiness probe error: Get "http://10.128.0.39:8081/readyz": dial tcp 10.128.0.39:8081: connect: connection refused body: |
| (x5) | openshift-catalogd | kubelet | catalogd-controller-manager-67bc7c997f-jdfsm | Unhealthy | Readiness probe failed: Get "http://10.128.0.36:8081/readyz": dial tcp 10.128.0.36:8081: connect: connection refused |
| (x4) | openshift-marketplace | kubelet | marketplace-operator-6cc5b65c6b-wqxmh | ProbeError | Readiness probe error: Get "http://10.128.0.14:8080/healthz": dial tcp 10.128.0.14:8080: connect: connection refused body: |
| (x4) | openshift-marketplace | kubelet | marketplace-operator-6cc5b65c6b-wqxmh | Unhealthy | Readiness probe failed: Get "http://10.128.0.14:8080/healthz": dial tcp 10.128.0.14:8080: connect: connection refused |
| | openshift-machine-api | kubelet | control-plane-machine-set-operator-d8bf84b88-hmpc7 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47c1d88223ffb35bb36a4d2bde736fb3e45f08e204519387e0e52e3e3dc00cfb" already present on machine |
| | openshift-catalogd | kubelet | catalogd-controller-manager-67bc7c997f-jdfsm | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3bb3c46533b24f1a6a6669117dc888ed8f0c7ae56b34068a4ff2052335e34c4e" already present on machine |
| | openshift-network-node-identity | kubelet | network-node-identity-xwftw | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec" already present on machine |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9f2b80358f029728d7f4ce46418bb6859d9ea7365de7b6f97a5f549ed6e77471" already present on machine |
| | openshift-operator-controller | kubelet | operator-controller-controller-manager-85c9b89969-4n2ls | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae30b3ab740f21c451d0272bceacb99fa34d22bbf2ea22f1e1e18230a156104b" already present on machine |
| | openshift-machine-api | kubelet | control-plane-machine-set-operator-d8bf84b88-hmpc7 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47c1d88223ffb35bb36a4d2bde736fb3e45f08e204519387e0e52e3e3dc00cfb" already present on machine |
| | openshift-catalogd | kubelet | catalogd-controller-manager-67bc7c997f-jdfsm | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3bb3c46533b24f1a6a6669117dc888ed8f0c7ae56b34068a4ff2052335e34c4e" already present on machine |
| | openshift-catalogd | kubelet | catalogd-controller-manager-67bc7c997f-jdfsm | Started | Started container manager |
| | openshift-operator-controller | kubelet | operator-controller-controller-manager-85c9b89969-4n2ls | Started | Started container manager |
| | openshift-machine-api | kubelet | control-plane-machine-set-operator-d8bf84b88-hmpc7 | Started | Started container control-plane-machine-set-operator |
| | openshift-catalogd | kubelet | catalogd-controller-manager-67bc7c997f-jdfsm | Created | Created container: manager |
| | openshift-network-node-identity | kubelet | network-node-identity-xwftw | Created | Created container: approver |
| | openshift-network-node-identity | kubelet | network-node-identity-xwftw | Started | Started container approver |
| | openshift-machine-api | kubelet | control-plane-machine-set-operator-d8bf84b88-hmpc7 | Created | Created container: control-plane-machine-set-operator |
| | openshift-machine-api | kubelet | control-plane-machine-set-operator-d8bf84b88-hmpc7 | Started | Started container control-plane-machine-set-operator |
| | openshift-operator-controller | kubelet | operator-controller-controller-manager-85c9b89969-4n2ls | Created | Created container: manager |
| | openshift-machine-api | kubelet | control-plane-machine-set-operator-d8bf84b88-hmpc7 | Created | Created container: control-plane-machine-set-operator |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c | Created | Created container: cluster-cloud-controller-manager |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c | Started | Started container cluster-cloud-controller-manager |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9f2b80358f029728d7f4ce46418bb6859d9ea7365de7b6f97a5f549ed6e77471" already present on machine |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c | Created | Created container: config-sync-controllers |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-6fb8ffcd9b-v7m7c | Started | Started container config-sync-controllers |
| | openshift-catalogd | kubelet | catalogd-controller-manager-67bc7c997f-jdfsm | Started | Started container manager |
| | openshift-catalogd | kubelet | catalogd-controller-manager-67bc7c997f-jdfsm | Created | Created container: manager |
| (x3) | openshift-controller-manager | kubelet | controller-manager-b9c8fdfbc-rh9v2 | Unhealthy | Liveness probe failed: Get "https://10.128.0.51:8443/healthz": dial tcp 10.128.0.51:8443: connect: connection refused |
| (x3) | openshift-controller-manager | kubelet | controller-manager-b9c8fdfbc-rh9v2 | ProbeError | Liveness probe error: Get "https://10.128.0.51:8443/healthz": dial tcp 10.128.0.51:8443: connect: connection refused body: |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcd-ensure-env-vars |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8bed6766df40c0c172611f3e4555cd20db639eb505b2345abed6d5babdcbb5e3" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcd-ensure-env-vars |
| (x3) | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | ProbeError | Liveness probe error: Get "https://192.168.32.10:10259/healthz": dial tcp 192.168.32.10:10259: connect: connection refused body: |
| (x3) | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Unhealthy | Liveness probe failed: Get "https://192.168.32.10:10259/healthz": dial tcp 192.168.32.10:10259: connect: connection refused |
| (x9) | openshift-controller-manager | kubelet | controller-manager-b9c8fdfbc-rh9v2 | Unhealthy | Readiness probe failed: Get "https://10.128.0.51:8443/healthz": dial tcp 10.128.0.51:8443: connect: connection refused |
| (x9) | openshift-controller-manager | kubelet | controller-manager-b9c8fdfbc-rh9v2 | ProbeError | Readiness probe error: Get "https://10.128.0.51:8443/healthz": dial tcp 10.128.0.51:8443: connect: connection refused body: |
| (x8) | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Unhealthy | Readiness probe failed: Get "https://192.168.32.10:10259/healthz": dial tcp 192.168.32.10:10259: connect: connection refused |
| (x8) | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | ProbeError | Readiness probe error: Get "https://192.168.32.10:10259/healthz": dial tcp 192.168.32.10:10259: connect: connection refused body: |
| | openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-bb7ffbb8d-rj245 | Started | Started container ovnkube-cluster-manager |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-bb7ffbb8d-rj245 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-bb7ffbb8d-rj245 | Created | Created container: ovnkube-cluster-manager |
| | openshift-cluster-machine-approver | kubelet | machine-approver-8569dd85ff-f9g8s | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e7ac69aff2f28f6b3cbdb166c7dac7a3490167bcd670cd7057bdde1e1e7684d" already present on machine |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Created | Created container: kube-scheduler |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Started | Started container kube-scheduler |
| | openshift-cluster-machine-approver | kubelet | machine-approver-8569dd85ff-f9g8s | Created | Created container: machine-approver-controller |
| | openshift-cluster-machine-approver | kubelet | machine-approver-8569dd85ff-f9g8s | Started | Started container machine-approver-controller |
| (x3) | openshift-route-controller-manager | kubelet | route-controller-manager-6978b88779-vp5tv | ProbeError | Liveness probe error: Get "https://10.128.0.52:8443/healthz": dial tcp 10.128.0.52:8443: connect: connection refused body: |
| (x3) | openshift-route-controller-manager | kubelet | route-controller-manager-6978b88779-vp5tv | Unhealthy | Liveness probe failed: Get "https://10.128.0.52:8443/healthz": dial tcp 10.128.0.52:8443: connect: connection refused |
| (x3) | openshift-operator-lifecycle-manager | kubelet | package-server-manager-5c696dbdcd-t7n5b | Unhealthy | Liveness probe failed: Get "http://10.128.0.15:8080/healthz": dial tcp 10.128.0.15:8080: connect: connection refused |
| (x3) | openshift-operator-lifecycle-manager | kubelet | package-server-manager-5c696dbdcd-t7n5b | ProbeError | Liveness probe error: Get "http://10.128.0.15:8080/healthz": dial tcp 10.128.0.15:8080: connect: connection refused body: |
| | openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-ff6c9b66-k8xp8 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:56dffbd86bfae06921432678caf184b335bf2fc6ac7ee128f48aee396d57ea55" already present on machine |
| | openshift-operator-lifecycle-manager | kubelet | package-server-manager-5c696dbdcd-t7n5b | Started | Started container package-server-manager |
| (x4) | openshift-operator-lifecycle-manager | kubelet | package-server-manager-5c696dbdcd-t7n5b | ProbeError | Readiness probe error: Get "http://10.128.0.15:8080/healthz": dial tcp 10.128.0.15:8080: connect: connection refused body: |
| | openshift-operator-lifecycle-manager | kubelet | package-server-manager-5c696dbdcd-t7n5b | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" already present on machine |
| | openshift-operator-lifecycle-manager | kubelet | package-server-manager-5c696dbdcd-t7n5b | Created | Created container: package-server-manager |
| | openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-ff6c9b66-k8xp8 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:56dffbd86bfae06921432678caf184b335bf2fc6ac7ee128f48aee396d57ea55" already present on machine |
| | openshift-image-registry | kubelet | cluster-image-registry-operator-96c8c64b8-dtwmd | Created | Created container: cluster-image-registry-operator |
| (x4) | openshift-operator-lifecycle-manager | kubelet | package-server-manager-5c696dbdcd-t7n5b | Unhealthy | Readiness probe failed: Get "http://10.128.0.15:8080/healthz": dial tcp 10.128.0.15:8080: connect: connection refused |
| | openshift-image-registry | kubelet | cluster-image-registry-operator-96c8c64b8-dtwmd | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc03f91dbf08df9907c0ebad30c54a7fa92285b19ec4e440ed762b197378a861" already present on machine |
| | openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-ff6c9b66-k8xp8 | Started | Started container cluster-node-tuning-operator |
| | openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-ff6c9b66-k8xp8 | Created | Created container: cluster-node-tuning-operator |
| | openshift-image-registry | kubelet | cluster-image-registry-operator-96c8c64b8-dtwmd | Started | Started container cluster-image-registry-operator |
| | openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-ff6c9b66-k8xp8 | Created | Created container: cluster-node-tuning-operator |
| | openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-ff6c9b66-k8xp8 | Started | Started container cluster-node-tuning-operator |
| (x3) | openshift-config-operator | kubelet | openshift-config-operator-7c6bdb986f-fcnqs | Unhealthy | Liveness probe failed: Get "https://10.128.0.22:8443/healthz": dial tcp 10.128.0.22:8443: connect: connection refused |
| (x3) | openshift-config-operator | kubelet | openshift-config-operator-7c6bdb986f-fcnqs | ProbeError | Liveness probe error: Get "https://10.128.0.22:8443/healthz": dial tcp 10.128.0.22:8443: connect: connection refused body: |
| (x7) | openshift-route-controller-manager | kubelet | route-controller-manager-6978b88779-vp5tv | ProbeError | Readiness probe error: Get "https://10.128.0.52:8443/healthz": dial tcp 10.128.0.52:8443: connect: connection refused body: |
| (x7) | openshift-route-controller-manager | kubelet | route-controller-manager-6978b88779-vp5tv | Unhealthy | Readiness probe failed: Get "https://10.128.0.52:8443/healthz": dial tcp 10.128.0.52:8443: connect: connection refused |
| (x6) | openshift-config-operator | kubelet | openshift-config-operator-7c6bdb986f-fcnqs | Unhealthy | Readiness probe failed: Get "https://10.128.0.22:8443/healthz": dial tcp 10.128.0.22:8443: connect: connection refused |
| | openshift-machine-config-operator | kubelet | machine-config-operator-84976bb859-kmc95 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9250bc5d881852654c420b833aa018257e927522e9d8e1b74307dd7b4b0bfc42" already present on machine |
| | openshift-machine-api | kubelet | cluster-autoscaler-operator-67fd9768b5-6dzpr | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd8adea550cbbaf16cb9409b31ec8b997320d247f9f30c80608ac1fbf9c7a07e" already present on machine |
| | openshift-machine-api | kubelet | machine-api-operator-bd7dd5c46-g6fgz | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fa28b66298c8b34f2c7b357b012e663e3954cfc7c85aa1e44651a79aeaf8b2a9" already present on machine |
| | openshift-machine-api | kubelet | machine-api-operator-bd7dd5c46-g6fgz | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fa28b66298c8b34f2c7b357b012e663e3954cfc7c85aa1e44651a79aeaf8b2a9" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8bed6766df40c0c172611f3e4555cd20db639eb505b2345abed6d5babdcbb5e3" already present on machine |
| | openshift-machine-config-operator | kubelet | machine-config-controller-686c884b4d-5q97f | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9250bc5d881852654c420b833aa018257e927522e9d8e1b74307dd7b4b0bfc42" already present on machine |
| | openshift-machine-api | kubelet | cluster-autoscaler-operator-67fd9768b5-6dzpr | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd8adea550cbbaf16cb9409b31ec8b997320d247f9f30c80608ac1fbf9c7a07e" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver-cert-regeneration-controller |
| | openshift-machine-api | kubelet | cluster-autoscaler-operator-67fd9768b5-6dzpr | Created | Created container: cluster-autoscaler-operator |
| | openshift-machine-config-operator | kubelet | machine-config-controller-686c884b4d-5q97f | Created | Created container: machine-config-controller |
| | openshift-machine-config-operator | kubelet | machine-config-controller-686c884b4d-5q97f | Started | Started container machine-config-controller |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcd-resources-copy |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcd-resources-copy |
| | openshift-machine-api | kubelet | cluster-autoscaler-operator-67fd9768b5-6dzpr | Started | Started container cluster-autoscaler-operator |
| | openshift-machine-api | kubelet | machine-api-operator-bd7dd5c46-g6fgz | Started | Started container machine-api-operator |
| | openshift-machine-api | kubelet | machine-api-operator-bd7dd5c46-g6fgz | Created | Created container: machine-api-operator |
| | openshift-machine-api | kubelet | machine-api-operator-bd7dd5c46-g6fgz | Started | Started container machine-api-operator |
| | openshift-machine-api | kubelet | cluster-autoscaler-operator-67fd9768b5-6dzpr | Started | Started container cluster-autoscaler-operator |
| | openshift-machine-api | kubelet | cluster-autoscaler-operator-67fd9768b5-6dzpr | Created | Created container: cluster-autoscaler-operator |
| | openshift-machine-api | kubelet | machine-api-operator-bd7dd5c46-g6fgz | Created | Created container: machine-api-operator |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver-cert-regeneration-controller |
| | openshift-machine-config-operator | kubelet | machine-config-operator-84976bb859-kmc95 | Started | Started container machine-config-operator |
| | openshift-route-controller-manager | kubelet | route-controller-manager-6978b88779-vp5tv | Started | Started container route-controller-manager |
| | openshift-machine-config-operator | kubelet | machine-config-operator-84976bb859-kmc95 | Created | Created container: machine-config-operator |
| (x2) | openshift-route-controller-manager | kubelet | route-controller-manager-6978b88779-vp5tv | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0871b6c16b38a2eda5d1c89fd75079aff0775224307e940557e6fda6ba229f38" already present on machine |
| (x2) | openshift-route-controller-manager | kubelet | route-controller-manager-6978b88779-vp5tv | Created | Created container: route-controller-manager |
| | openshift-etcd-operator | kubelet | etcd-operator-67bf55ccdd-pjm6n | Unhealthy | Liveness probe failed: Get "https://10.128.0.12:8443/healthz": net/http: TLS handshake timeout |
| | openshift-etcd-operator | kubelet | etcd-operator-67bf55ccdd-pjm6n | ProbeError | Liveness probe error: Get "https://10.128.0.12:8443/healthz": net/http: TLS handshake timeout body: |
| (x2) | openshift-service-ca | kubelet | service-ca-676cd8b9b5-bfm5s | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5abe992def861ec075251ae17bbd66fa23bd05bd3701953c0fdcf68a8d161f1e" already present on machine |
| | openshift-cluster-version | kubelet | cluster-version-operator-649c4f5445-7kdb7 | Pulled | Container image "quay.io/openshift-release-dev/ocp-release@sha256:6177c447b98c36a42fd45fa2ba413da73d14d0a7ad3aecfa977554f5ae9583cc" already present on machine |
| (x2) | openshift-kube-controller-manager-operator | kubelet | kube-controller-manager-operator-78ff47c7c5-xvzq9 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e8ae0b5bab647ff989f276cead5f360bcb88c813f181d75dc3106eb5dbde0b39" already present on machine |
| | openshift-cloud-credential-operator | kubelet | cloud-credential-operator-595c8f9ff-p8hbc | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1faa2081a881db884a86bdfe33fcb6a6af1d14c3e9ee5c44dfe4b09045684e13" already present on machine |
| (x2) | openshift-config-operator | kubelet | openshift-config-operator-7c6bdb986f-fcnqs | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2431778975829348e271dc9bf123c7a24c81a52537a61414cd17c8436436078b" already present on machine |
| (x2) | openshift-kube-storage-version-migrator-operator | kubelet | kube-storage-version-migrator-operator-cd5474998-tckph | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e391fce0b2e04f22fc089597db9e0671ba7f8b5b3a709151b5f33dd23b262144" already present on machine |
| (x2) | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-operator-7b87b97578-9fpgj | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:13d06502c0f0a3c73f69bf8d0743718f7cfc46e71f4a12916517ad7e9bff17e1" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8bed6766df40c0c172611f3e4555cd20db639eb505b2345abed6d5babdcbb5e3" already present on machine |
| (x2) | openshift-kube-storage-version-migrator-operator | kubelet | kube-storage-version-migrator-operator-cd5474998-tckph | Created | Created container: kube-storage-version-migrator-operator |
| (x2) | openshift-service-ca | kubelet | service-ca-676cd8b9b5-bfm5s | Started | Started container service-ca-controller |
| (x2) | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-operator-7b87b97578-9fpgj | Created | Created container: csi-snapshot-controller-operator |
| (x2) | openshift-config-operator | kubelet | openshift-config-operator-7c6bdb986f-fcnqs | Created | Created container: openshift-config-operator |
| (x2) | openshift-kube-storage-version-migrator-operator | kubelet | kube-storage-version-migrator-operator-cd5474998-tckph | Started | Started container kube-storage-version-migrator-operator |
| (x2) | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-operator-7b87b97578-9fpgj | Started | Started container csi-snapshot-controller-operator |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8bed6766df40c0c172611f3e4555cd20db639eb505b2345abed6d5babdcbb5e3" already present on machine |
| | openshift-cluster-version | kubelet | cluster-version-operator-649c4f5445-7kdb7 | Created | Created container: cluster-version-operator |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcd |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcd |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8bed6766df40c0c172611f3e4555cd20db639eb505b2345abed6d5babdcbb5e3" already present on machine |
| (x2) | openshift-service-ca | kubelet | service-ca-676cd8b9b5-bfm5s | Created | Created container: service-ca-controller |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcd-metrics |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcd-metrics |
| | openshift-cluster-version | kubelet | cluster-version-operator-649c4f5445-7kdb7 | Started | Started container cluster-version-operator |
| (x2) | openshift-kube-controller-manager-operator | kubelet | kube-controller-manager-operator-78ff47c7c5-xvzq9 | Created | Created container: kube-controller-manager-operator |
| (x2) | openshift-kube-controller-manager-operator | kubelet | kube-controller-manager-operator-78ff47c7c5-xvzq9 | Started | Started container kube-controller-manager-operator |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6dd9324942b3d09b4b9a768f36b47be4e555d947910ee3d115fc5448c95f7399" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcdctl |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcdctl |
| | openshift-cloud-credential-operator | kubelet | cloud-credential-operator-595c8f9ff-p8hbc | Started | Started container cloud-credential-operator |
| | openshift-cloud-credential-operator | kubelet | cloud-credential-operator-595c8f9ff-p8hbc | Created | Created container: cloud-credential-operator |
| (x2) | openshift-config-operator | kubelet | openshift-config-operator-7c6bdb986f-fcnqs | Started | Started container openshift-config-operator |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcd-rev |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcd-readyz |
| | openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcd-rev |
| | openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6dd9324942b3d09b4b9a768f36b47be4e555d947910ee3d115fc5448c95f7399" already present on machine |
| | openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcd-readyz |
| (x2) | openshift-etcd-operator | kubelet | etcd-operator-67bf55ccdd-pjm6n | ProbeError | Liveness probe error: Get "https://10.128.0.12:8443/healthz": dial tcp 10.128.0.12:8443: connect: connection refused body: |
| (x2) | openshift-etcd-operator | kubelet | etcd-operator-67bf55ccdd-pjm6n | Unhealthy | Liveness probe failed: Get "https://10.128.0.12:8443/healthz": dial tcp 10.128.0.12:8443: connect: connection refused |
| (x2) | openshift-kube-apiserver-operator | kubelet | kube-apiserver-operator-54984b6678-p5mdv | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49" already present on machine |
| (x2) | openshift-kube-scheduler-operator | kubelet | openshift-kube-scheduler-operator-7485d55966-wcpf8 | Started | Started container kube-scheduler-operator-container |
| (x2) | openshift-kube-apiserver-operator | kubelet | kube-apiserver-operator-54984b6678-p5mdv | Started | Started container kube-apiserver-operator |
| (x2) | openshift-etcd-operator | kubelet | etcd-operator-67bf55ccdd-pjm6n | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6dd9324942b3d09b4b9a768f36b47be4e555d947910ee3d115fc5448c95f7399" already present on machine |
| (x2) | openshift-etcd-operator | kubelet | etcd-operator-67bf55ccdd-pjm6n | Created | Created container: etcd-operator |
| (x2) | openshift-etcd-operator | kubelet | etcd-operator-67bf55ccdd-pjm6n | Started | Started container etcd-operator |
| (x2) | openshift-kube-scheduler-operator | kubelet | openshift-kube-scheduler-operator-7485d55966-wcpf8 | Created | Created container: kube-scheduler-operator-container |
| (x2) | openshift-kube-scheduler-operator | kubelet | openshift-kube-scheduler-operator-7485d55966-wcpf8 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f56d01ef06fe016c748e8c87538dea8e9fcc84856eb116bd3597cc8e042e9f0a" already present on machine |
| (x7) | openshift-config-operator | kubelet | openshift-config-operator-7c6bdb986f-fcnqs | ProbeError | Readiness probe error: Get "https://10.128.0.22:8443/healthz": dial tcp 10.128.0.22:8443: connect: connection refused body: |
| (x2) | openshift-kube-apiserver-operator | kubelet | kube-apiserver-operator-54984b6678-p5mdv | Created | Created container: kube-apiserver-operator |
| (x3) | openshift-authentication-operator | kubelet | authentication-operator-755d954778-jrdqm | Started | Started container authentication-operator |
| (x3) | openshift-authentication-operator | kubelet | authentication-operator-755d954778-jrdqm | ProbeError | Liveness probe error: Get "https://10.128.0.24:8443/healthz": dial tcp 10.128.0.24:8443: connect: connection refused body: |
| (x3) | openshift-authentication-operator | kubelet | authentication-operator-755d954778-jrdqm | Unhealthy | Liveness probe failed: Get "https://10.128.0.24:8443/healthz": dial tcp 10.128.0.24:8443: connect: connection refused |
| | openshift-cluster-storage-operator | kubelet | cluster-storage-operator-75b869db96-qbmw5 | BackOff | Back-off restarting failed container cluster-storage-operator in pod cluster-storage-operator-75b869db96-qbmw5_openshift-cluster-storage-operator(ad81b5bd-2f97-4e7e-a12b-746998fa59f2) |
| | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-55b69c6c48-mzk89 | BackOff | Back-off restarting failed container cluster-olm-operator in pod cluster-olm-operator-55b69c6c48-mzk89_openshift-cluster-olm-operator(6c734c89-515e-4ff0-82d1-831ddaf0b99e) |
| | openshift-network-operator | kubelet | network-operator-6fcf4c966-l24cg | BackOff | Back-off restarting failed container network-operator in pod network-operator-6fcf4c966-l24cg_openshift-network-operator(4fd2c79d-1e10-4f09-8a33-c66598abc99a) |
| (x2) | openshift-network-operator | kubelet | network-operator-6fcf4c966-l24cg | Started | Started container network-operator |
| (x2) | openshift-network-operator | kubelet | network-operator-6fcf4c966-l24cg | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aacc2698d097e25bf26e35393ef3536f7a240880d0a87f46a2b7ea3c13731d1e" already present on machine |
| (x2) | openshift-controller-manager-operator | kubelet | openshift-controller-manager-operator-5f5f84757d-dsfkk | BackOff | Back-off restarting failed container openshift-controller-manager-operator in pod openshift-controller-manager-operator-5f5f84757d-dsfkk_openshift-controller-manager-operator(c7ed6ff7-56ba-4806-9e09-b8ae6d79cfda) |
| (x2) | openshift-service-ca-operator | kubelet | service-ca-operator-5dc4688546-sg75p | BackOff | Back-off restarting failed container service-ca-operator in pod service-ca-operator-5dc4688546-sg75p_openshift-service-ca-operator(65d9f008-7777-48fe-85fe-9d54a7bbcea9) |
| (x2) | openshift-network-operator | kubelet | network-operator-6fcf4c966-l24cg | Created | Created container: network-operator |
| (x2) | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-55b69c6c48-mzk89 | Started | Started container cluster-olm-operator |
| (x2) | openshift-cluster-storage-operator | kubelet | cluster-storage-operator-75b869db96-qbmw5 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a90d19460fbc705172df7759a3da394930623c6b6974620b79ffa07bab53c51f" already present on machine |
| (x2) | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-55b69c6c48-mzk89 | Created | Created container: cluster-olm-operator |
| (x2) | openshift-cluster-olm-operator | kubelet | cluster-olm-operator-55b69c6c48-mzk89 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:333e6572029953b4c4676076f0991ee6e5c7d28cbe2887c71b1682f19831d8a1" already present on machine |
| (x2) | openshift-cluster-storage-operator | kubelet | cluster-storage-operator-75b869db96-qbmw5 | Created | Created container: cluster-storage-operator |
| (x2) | openshift-cluster-storage-operator | kubelet | cluster-storage-operator-75b869db96-qbmw5 | Started | Started container cluster-storage-operator |
| (x2) | openshift-apiserver-operator | kubelet | openshift-apiserver-operator-6d4655d9cf-5f5g9 | BackOff | Back-off restarting failed container openshift-apiserver-operator in pod openshift-apiserver-operator-6d4655d9cf-5f5g9_openshift-apiserver-operator(af61bda0-c7b4-489d-a671-eaa5299942fe) |
| (x3) | openshift-service-ca-operator | kubelet | service-ca-operator-5dc4688546-sg75p | Started | Started container service-ca-operator |
| (x3) | openshift-service-ca-operator | kubelet | service-ca-operator-5dc4688546-sg75p | Created | Created container: service-ca-operator |
| (x3) | openshift-service-ca-operator | kubelet | service-ca-operator-5dc4688546-sg75p | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5abe992def861ec075251ae17bbd66fa23bd05bd3701953c0fdcf68a8d161f1e" already present on machine |
| (x3) | openshift-controller-manager-operator | kubelet | openshift-controller-manager-operator-5f5f84757d-dsfkk | Started | Started container openshift-controller-manager-operator |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ConfigMapUpdated | Updated ConfigMap/metrics-client-ca -n openshift-monitoring: cause by changes in data.client-ca.crt |
| (x3) | openshift-controller-manager-operator | kubelet | openshift-controller-manager-operator-5f5f84757d-dsfkk | Created | Created container: openshift-controller-manager-operator |
| (x3) | openshift-controller-manager-operator | kubelet | openshift-controller-manager-operator-5f5f84757d-dsfkk | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f353131d8a1223db7f637c9851016b3a348d80c2b2be663a2db6d01e14ddca88" already present on machine |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ConfigMapUpdated | Updated ConfigMap/metrics-client-ca -n openshift-monitoring: cause by changes in data.client-ca.crt |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ValidatingWebhookConfigurationUpdated | Updated ValidatingWebhookConfiguration.admissionregistration.k8s.io/prometheusrules.openshift.io because it changed |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ValidatingWebhookConfigurationUpdated | Updated ValidatingWebhookConfiguration.admissionregistration.k8s.io/alertmanagerconfigs.openshift.io because it changed |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ValidatingWebhookConfigurationUpdated | Updated ValidatingWebhookConfiguration.admissionregistration.k8s.io/prometheusrules.openshift.io because it changed |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ValidatingWebhookConfigurationUpdated | Updated ValidatingWebhookConfiguration.admissionregistration.k8s.io/alertmanagerconfigs.openshift.io because it changed |
| (x3) | openshift-machine-api | kubelet | cluster-baremetal-operator-7bc947fc7d-8qkdw | BackOff | Back-off restarting failed container cluster-baremetal-operator in pod cluster-baremetal-operator-7bc947fc7d-8qkdw_openshift-machine-api(7307f70e-ee5b-4f81-8155-718a02c9efe7) |
| (x3) | openshift-machine-api | kubelet | cluster-baremetal-operator-7bc947fc7d-8qkdw | BackOff | Back-off restarting failed container cluster-baremetal-operator in pod cluster-baremetal-operator-7bc947fc7d-8qkdw_openshift-machine-api(7307f70e-ee5b-4f81-8155-718a02c9efe7) |
| (x3) | openshift-apiserver-operator | kubelet | openshift-apiserver-operator-6d4655d9cf-5f5g9 | Created | Created container: openshift-apiserver-operator |
| (x3) | openshift-apiserver-operator | kubelet | openshift-apiserver-operator-6d4655d9cf-5f5g9 | Started | Started container openshift-apiserver-operator |
| (x3) | openshift-apiserver-operator | kubelet | openshift-apiserver-operator-6d4655d9cf-5f5g9 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6dd544a8a6b4d08fe0f4fd076109c09cf181302ab6056ec6b2b89d68a52954c5" already present on machine |
| | openshift-operator-lifecycle-manager | operator-lifecycle-manager | packageserver | InstallSucceeded | install strategy completed with no errors |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | SecretCreated | Created Secret/metrics-server-flbia8i8i4eih -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | SecretCreated | Created Secret/metrics-server-flbia8i8i4eih -n openshift-monitoring because it was missing |
| (x3) | openshift-authentication-operator | kubelet | authentication-operator-755d954778-jrdqm | Unhealthy | Liveness probe failed: Get "https://10.128.0.24:8443/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) |
| (x2) | openshift-authentication-operator |
kubelet |
authentication-operator-755d954778-jrdqm |
Killing |
Container authentication-operator failed liveness probe, will be restarted |
| (x3) | openshift-authentication-operator |
kubelet |
authentication-operator-755d954778-jrdqm |
ProbeError |
Liveness probe error: Get "https://10.128.0.24:8443/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) body: |
| (x4) | openshift-authentication-operator |
kubelet |
authentication-operator-755d954778-jrdqm |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:047699c5a63593f45e9dd6f9fac0fa636ffc012331ee592891bfb08001bdd963" already present on machine |
| (x4) | openshift-authentication-operator |
kubelet |
authentication-operator-755d954778-jrdqm |
Created |
Created container: authentication-operator |
openshift-kube-controller-manager |
cluster-policy-controller |
kube-controller-manager-master-0 |
ControlPlaneTopology |
unable to get control plane topology, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope | |
openshift-controller-manager |
openshift-controller-manager |
openshift-master-controllers |
LeaderElection |
controller-manager-b9c8fdfbc-rh9v2 became leader | |
openshift-ovn-kubernetes |
ovnk-controlplane |
ovn-kubernetes-master |
LeaderElection |
ovnkube-control-plane-bb7ffbb8d-rj245 became leader | |
| (x4) | openshift-machine-api |
kubelet |
cluster-baremetal-operator-7bc947fc7d-8qkdw |
Created |
Created container: cluster-baremetal-operator |
| (x4) | openshift-machine-api |
kubelet |
cluster-baremetal-operator-7bc947fc7d-8qkdw |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b8fb1f11df51c131f5be8ddfc1b1c95ac13481f58d2dcd5a465a4a8341c0f49" already present on machine |
| (x4) | openshift-machine-api |
kubelet |
cluster-baremetal-operator-7bc947fc7d-8qkdw |
Started |
Started container cluster-baremetal-operator |
| (x4) | openshift-machine-api |
kubelet |
cluster-baremetal-operator-7bc947fc7d-8qkdw |
Started |
Started container cluster-baremetal-operator |
| (x4) | openshift-machine-api |
kubelet |
cluster-baremetal-operator-7bc947fc7d-8qkdw |
Created |
Created container: cluster-baremetal-operator |
| (x4) | openshift-machine-api |
kubelet |
cluster-baremetal-operator-7bc947fc7d-8qkdw |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b8fb1f11df51c131f5be8ddfc1b1c95ac13481f58d2dcd5a465a4a8341c0f49" already present on machine |
| (x8) | openshift-cluster-storage-operator |
kubelet |
csi-snapshot-controller-74b6595c6d-q4766 |
BackOff |
Back-off restarting failed container snapshot-controller in pod csi-snapshot-controller-74b6595c6d-q4766_openshift-cluster-storage-operator(129dba1e-73df-4ea4-96c0-3eba78d568ba) |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ServiceAccountCreated |
Created ServiceAccount/alertmanager-main -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ServiceAccountCreated |
Created ServiceAccount/thanos-querier -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/alertmanager-main because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ServiceAccountCreated |
Created ServiceAccount/alertmanager-main -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ServiceCreated |
Created Service/alertmanager-main -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
SecretCreated |
Created Secret/thanos-querier-kube-rbac-proxy-web -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ServiceCreated |
Created Service/alertmanager-main -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/thanos-querier because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
SecretCreated |
Created Secret/thanos-querier-kube-rbac-proxy-web -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ServiceAccountCreated |
Created ServiceAccount/thanos-querier -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/alertmanager-main because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/thanos-querier because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/alertmanager-main because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/alertmanager-main because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/prometheus-k8s-config -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
SecretCreated |
Created Secret/prometheus-k8s-kube-rbac-proxy-web -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ConfigMapCreated |
Created ConfigMap/alertmanager-trusted-ca-bundle -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/thanos-querier because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
SecretCreated |
Created Secret/kube-rbac-proxy -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/prometheus-k8s-config -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/alertmanager-prometheusk8s -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ServiceAccountCreated |
Created ServiceAccount/prometheus-k8s -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
SecretCreated |
Created Secret/prometheus-k8s-kube-rbac-proxy-web -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ConfigMapCreated |
Created ConfigMap/alertmanager-trusted-ca-bundle -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/thanos-querier because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ServiceAccountCreated |
Created ServiceAccount/prometheus-k8s -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/alertmanager-prometheusk8s -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
SecretCreated |
Created Secret/kube-rbac-proxy -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ServiceCreated |
Created Service/prometheus-k8s -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/prometheus-k8s -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n kube-system because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n default because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/prometheus-k8s -n openshift-user-workload-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/prometheus-k8s -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/prometheus-k8s -n kube-system because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/prometheus-k8s -n default because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/prometheus-k8s -n kube-system because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
SecretCreated |
Created Secret/thanos-querier-grpc-tls-eu11557dmf9qt -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/prometheus-k8s -n default because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s-config -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ServiceCreated |
Created Service/prometheus-k8s -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
SecretCreated |
Created Secret/thanos-querier-grpc-tls-eu11557dmf9qt -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n openshift-user-workload-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/prometheus-k8s -n openshift-user-workload-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n default because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n kube-system because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ServiceCreated |
Created Service/prometheus-k8s-thanos-sidecar -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s-config -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n openshift-user-workload-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ServiceCreated |
Created Service/prometheus-k8s-thanos-sidecar -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
SecretCreated |
Created Secret/prometheus-k8s-grpc-tls-7d1hat1ob2dke -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
SecretCreated |
Created Secret/prometheus-k8s-grpc-tls-7d1hat1ob2dke -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ConfigMapCreated |
Created ConfigMap/prometheus-trusted-ca-bundle -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
SecretCreated |
Created Secret/prometheus-k8s-additional-alertmanager-configs -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ConfigMapCreated |
Created ConfigMap/prometheus-trusted-ca-bundle -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
SecretCreated |
Created Secret/prometheus-k8s-additional-alertmanager-configs -n openshift-monitoring because it was missing | |
kube-system |
kube-controller-manager |
kube-controller-manager |
LeaderElection |
master-0_29f087c8-5127-49d1-814c-c2b888d9af53 became leader | |
openshift-monitoring |
replicaset-controller |
metrics-server-f94977f65 |
SuccessfulDelete |
Deleted pod: metrics-server-f94977f65-sgf5z | |
openshift-monitoring |
replicaset-controller |
thanos-querier-85c85bc675 |
SuccessfulCreate |
Created pod: thanos-querier-85c85bc675-62rqj | |
openshift-monitoring |
replicaset-controller |
thanos-querier-85c85bc675 |
SuccessfulCreate |
Created pod: thanos-querier-85c85bc675-62rqj | |
default |
node-controller |
master-0 |
RegisteredNode |
Node master-0 event: Registered Node master-0 in Controller | |
openshift-monitoring |
statefulset-controller |
alertmanager-main |
SuccessfulCreate |
create Pod alertmanager-main-0 in StatefulSet alertmanager-main successful | |
openshift-monitoring |
replicaset-controller |
metrics-server-75c4d5b7f |
SuccessfulCreate |
Created pod: metrics-server-75c4d5b7f-t6zcq | |
openshift-monitoring |
statefulset-controller |
alertmanager-main |
SuccessfulCreate |
create Pod alertmanager-main-0 in StatefulSet alertmanager-main successful | |
openshift-monitoring |
statefulset-controller |
prometheus-k8s |
SuccessfulCreate |
create Pod prometheus-k8s-0 in StatefulSet prometheus-k8s successful | |
openshift-monitoring |
deployment-controller |
thanos-querier |
ScalingReplicaSet |
Scaled up replica set thanos-querier-85c85bc675 to 1 | |
openshift-monitoring |
kubelet |
metrics-server-f94977f65-sgf5z |
Killing |
Stopping container metrics-server | |
openshift-monitoring |
deployment-controller |
thanos-querier |
ScalingReplicaSet |
Scaled up replica set thanos-querier-85c85bc675 to 1 | |
openshift-monitoring |
statefulset-controller |
prometheus-k8s |
SuccessfulCreate |
create Pod prometheus-k8s-0 in StatefulSet prometheus-k8s successful | |
openshift-monitoring |
kubelet |
metrics-server-f94977f65-sgf5z |
Killing |
Stopping container metrics-server | |
openshift-monitoring |
replicaset-controller |
metrics-server-f94977f65 |
SuccessfulDelete |
Deleted pod: metrics-server-f94977f65-sgf5z | |
openshift-monitoring |
deployment-controller |
metrics-server |
ScalingReplicaSet |
Scaled up replica set metrics-server-75c4d5b7f to 1 | |
openshift-monitoring |
deployment-controller |
metrics-server |
ScalingReplicaSet |
Scaled down replica set metrics-server-f94977f65 to 0 from 1 | |
openshift-monitoring |
replicaset-controller |
metrics-server-75c4d5b7f |
SuccessfulCreate |
Created pod: metrics-server-75c4d5b7f-t6zcq | |
openshift-monitoring |
deployment-controller |
metrics-server |
ScalingReplicaSet |
Scaled up replica set metrics-server-75c4d5b7f to 1 | |
openshift-monitoring |
deployment-controller |
metrics-server |
ScalingReplicaSet |
Scaled down replica set metrics-server-f94977f65 to 0 from 1 | |
openshift-monitoring |
kubelet |
telemeter-client-7fbdcd9689-spqtt |
Killing |
Stopping container kube-rbac-proxy | |
openshift-monitoring |
kubelet |
telemeter-client-7fbdcd9689-spqtt |
Killing |
Stopping container reload | |
openshift-monitoring |
kubelet |
telemeter-client-7fbdcd9689-spqtt |
Killing |
Stopping container kube-rbac-proxy | |
openshift-monitoring |
kubelet |
telemeter-client-7fbdcd9689-spqtt |
Killing |
Stopping container telemeter-client | |
openshift-monitoring |
kubelet |
telemeter-client-7fbdcd9689-spqtt |
Killing |
Stopping container reload | |
openshift-monitoring |
kubelet |
telemeter-client-7fbdcd9689-spqtt |
Killing |
Stopping container telemeter-client | |
| (x5) | openshift-cluster-storage-operator |
kubelet |
csi-snapshot-controller-74b6595c6d-q4766 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a26b20d3ef7b75aeb05acf9be2702f9d478822c43f679ff578811843692b960c" already present on machine |
| (x5) | openshift-cluster-storage-operator |
kubelet |
csi-snapshot-controller-74b6595c6d-q4766 |
Created |
Created container: snapshot-controller |
| (x5) | openshift-cluster-storage-operator |
kubelet |
csi-snapshot-controller-74b6595c6d-q4766 |
Started |
Started container snapshot-controller |
openshift-cluster-storage-operator |
snapshot-controller-leader/csi-snapshot-controller-74b6595c6d-q4766 |
snapshot-controller-leader |
LeaderElection |
csi-snapshot-controller-74b6595c6d-q4766 became leader | |
openshift-network-node-identity |
master-0_16442bc9-0b5d-4917-808b-ddb25da04f1b |
ovnkube-identity |
LeaderElection |
master-0_16442bc9-0b5d-4917-808b-ddb25da04f1b became leader | |
| (x9) | openshift-monitoring |
kubelet |
metrics-server-f94977f65-sgf5z |
FailedMount |
MountVolume.SetUp failed for volume "client-ca-bundle" : secret "metrics-server-aaauri1gstf68" not found |
| (x9) | openshift-monitoring |
kubelet |
metrics-server-f94977f65-sgf5z |
FailedMount |
MountVolume.SetUp failed for volume "client-ca-bundle" : secret "metrics-server-aaauri1gstf68" not found |
openshift-cloud-controller-manager-operator |
master-0_77454302-a61c-4f6a-9cca-b2a7c3f529e5 |
cluster-cloud-controller-manager-leader |
LeaderElection |
master-0_77454302-a61c-4f6a-9cca-b2a7c3f529e5 became leader | |
openshift-machine-api |
cluster-baremetal-operator-7bc947fc7d-8qkdw_b897f620-c527-412a-bf96-26144774a9f9 |
cluster-baremetal-operator |
LeaderElection |
cluster-baremetal-operator-7bc947fc7d-8qkdw_b897f620-c527-412a-bf96-26144774a9f9 became leader | |
openshift-machine-api |
cluster-baremetal-operator-7bc947fc7d-8qkdw_b897f620-c527-412a-bf96-26144774a9f9 |
cluster-baremetal-operator |
LeaderElection |
cluster-baremetal-operator-7bc947fc7d-8qkdw_b897f620-c527-412a-bf96-26144774a9f9 became leader | |
openshift-kube-controller-manager |
cluster-policy-controller |
cluster-policy-controller-lock |
LeaderElection |
master-0_e8bc4263-e3c2-4160-914e-3a68a782f137 became leader | |
openshift-kube-scheduler |
default-scheduler |
kube-scheduler |
LeaderElection |
master-0_bd4990d6-17b5-4158-a901-af1f87e95078 became leader | |
openshift-monitoring |
kubelet |
thanos-querier-85c85bc675-62rqj |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:86d47b2746de823e60068255722d2c0f1ff9d327b2865071a4f2f1e08b1f4ee9" | |
openshift-monitoring |
multus |
metrics-server-75c4d5b7f-t6zcq |
AddedInterface |
Add eth0 [10.128.0.88/23] from ovn-kubernetes | |
openshift-monitoring |
multus |
thanos-querier-85c85bc675-62rqj |
AddedInterface |
Add eth0 [10.128.0.89/23] from ovn-kubernetes | |
openshift-monitoring |
kubelet |
metrics-server-75c4d5b7f-t6zcq |
Created |
Created container: metrics-server | |
openshift-monitoring |
multus |
thanos-querier-85c85bc675-62rqj |
AddedInterface |
Add eth0 [10.128.0.89/23] from ovn-kubernetes | |
openshift-monitoring |
kubelet |
metrics-server-75c4d5b7f-t6zcq |
Started |
Started container metrics-server | |
openshift-monitoring |
kubelet |
metrics-server-75c4d5b7f-t6zcq |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c1a0aba9ead3a33353dc8a033699dfa4795f4050516677dad6ed4ac664094692" already present on machine | |
openshift-monitoring |
kubelet |
thanos-querier-85c85bc675-62rqj |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:86d47b2746de823e60068255722d2c0f1ff9d327b2865071a4f2f1e08b1f4ee9" | |
openshift-monitoring |
kubelet |
metrics-server-75c4d5b7f-t6zcq |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c1a0aba9ead3a33353dc8a033699dfa4795f4050516677dad6ed4ac664094692" already present on machine | |
openshift-monitoring |
kubelet |
metrics-server-75c4d5b7f-t6zcq |
Started |
Started container metrics-server | |
openshift-monitoring |
kubelet |
metrics-server-75c4d5b7f-t6zcq |
Created |
Created container: metrics-server | |
openshift-monitoring |
multus |
metrics-server-75c4d5b7f-t6zcq |
AddedInterface |
Add eth0 [10.128.0.88/23] from ovn-kubernetes | |
openshift-monitoring |
kubelet |
thanos-querier-85c85bc675-62rqj |
Started |
Started container thanos-query | |
openshift-monitoring |
kubelet |
thanos-querier-85c85bc675-62rqj |
Created |
Created container: kube-rbac-proxy-web | |
openshift-monitoring |
kubelet |
thanos-querier-85c85bc675-62rqj |
Started |
Started container kube-rbac-proxy-web | |
openshift-monitoring |
kubelet |
thanos-querier-85c85bc675-62rqj |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:86d47b2746de823e60068255722d2c0f1ff9d327b2865071a4f2f1e08b1f4ee9" in 2.357s (2.357s including waiting). Image size: 497535620 bytes. | |
openshift-monitoring |
kubelet |
thanos-querier-85c85bc675-62rqj |
Created |
Created container: thanos-query | |
openshift-operator-controller |
operator-controller-controller-manager-85c9b89969-4n2ls_d8d585f6-75ff-4e71-9f30-7467b72b9bb3 |
9c4404e7.operatorframework.io |
LeaderElection |
operator-controller-controller-manager-85c9b89969-4n2ls_d8d585f6-75ff-4e71-9f30-7467b72b9bb3 became leader | |
openshift-monitoring |
kubelet |
thanos-querier-85c85bc675-62rqj |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine | |
openshift-monitoring |
kubelet |
thanos-querier-85c85bc675-62rqj |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine | |
openshift-monitoring |
kubelet |
thanos-querier-85c85bc675-62rqj |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine | |
openshift-monitoring |
kubelet |
thanos-querier-85c85bc675-62rqj |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine | |
openshift-monitoring |
kubelet |
thanos-querier-85c85bc675-62rqj |
Created |
Created container: kube-rbac-proxy-web | |
openshift-monitoring |
kubelet |
thanos-querier-85c85bc675-62rqj |
Started |
Started container kube-rbac-proxy-web | |
openshift-monitoring |
kubelet |
thanos-querier-85c85bc675-62rqj |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:86d47b2746de823e60068255722d2c0f1ff9d327b2865071a4f2f1e08b1f4ee9" in 2.357s (2.357s including waiting). Image size: 497535620 bytes. | |
openshift-monitoring |
kubelet |
thanos-querier-85c85bc675-62rqj |
Created |
Created container: thanos-query | |
openshift-monitoring |
kubelet |
thanos-querier-85c85bc675-62rqj |
Started |
Started container thanos-query | |
openshift-monitoring |
kubelet |
thanos-querier-85c85bc675-62rqj |
Created |
Created container: kube-rbac-proxy | |
openshift-monitoring |
kubelet |
thanos-querier-85c85bc675-62rqj |
Started |
Started container kube-rbac-proxy | |
openshift-monitoring |
kubelet |
thanos-querier-85c85bc675-62rqj |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:abf98e8b78df5cf21c9da051db2827b8c9081cf3ea201bf9017a5d9548dbc73e" | |
openshift-monitoring |
kubelet |
thanos-querier-85c85bc675-62rqj |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:abf98e8b78df5cf21c9da051db2827b8c9081cf3ea201bf9017a5d9548dbc73e" | |
openshift-monitoring |
kubelet |
thanos-querier-85c85bc675-62rqj |
Started |
Started container kube-rbac-proxy | |
openshift-monitoring |
kubelet |
thanos-querier-85c85bc675-62rqj |
Created |
Created container: kube-rbac-proxy | |
openshift-monitoring |
kubelet |
thanos-querier-85c85bc675-62rqj |
Started |
Started container prom-label-proxy | |
openshift-monitoring |
kubelet |
thanos-querier-85c85bc675-62rqj |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine | |
openshift-monitoring |
kubelet |
thanos-querier-85c85bc675-62rqj |
Started |
Started container kube-rbac-proxy-metrics | |
openshift-monitoring |
kubelet |
thanos-querier-85c85bc675-62rqj |
Created |
Created container: prom-label-proxy | |
openshift-monitoring |
kubelet |
thanos-querier-85c85bc675-62rqj |
Started |
Started container kube-rbac-proxy-metrics | |
openshift-monitoring |
kubelet |
thanos-querier-85c85bc675-62rqj |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine | |
openshift-monitoring |
kubelet |
thanos-querier-85c85bc675-62rqj |
Created |
Created container: kube-rbac-proxy-rules | |
openshift-monitoring |
kubelet |
thanos-querier-85c85bc675-62rqj |
Started |
Started container kube-rbac-proxy-rules | |
openshift-monitoring |
kubelet |
thanos-querier-85c85bc675-62rqj |
Created |
Created container: kube-rbac-proxy-metrics | |
openshift-monitoring |
kubelet |
thanos-querier-85c85bc675-62rqj |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine | |
openshift-monitoring |
kubelet |
thanos-querier-85c85bc675-62rqj |
Created |
Created container: kube-rbac-proxy-metrics | |
openshift-monitoring |
kubelet |
thanos-querier-85c85bc675-62rqj |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine | |
openshift-monitoring |
kubelet |
thanos-querier-85c85bc675-62rqj |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:abf98e8b78df5cf21c9da051db2827b8c9081cf3ea201bf9017a5d9548dbc73e" in 1.103s (1.103s including waiting). Image size: 407929286 bytes. | |
openshift-monitoring |
kubelet |
thanos-querier-85c85bc675-62rqj |
Created |
Created container: prom-label-proxy | |
openshift-monitoring |
kubelet |
thanos-querier-85c85bc675-62rqj |
Started |
Started container kube-rbac-proxy-rules | |
openshift-monitoring |
kubelet |
thanos-querier-85c85bc675-62rqj |
Created |
Created container: kube-rbac-proxy-rules | |
openshift-monitoring |
kubelet |
thanos-querier-85c85bc675-62rqj |
Started |
Started container prom-label-proxy | |
openshift-monitoring |
kubelet |
thanos-querier-85c85bc675-62rqj |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:abf98e8b78df5cf21c9da051db2827b8c9081cf3ea201bf9017a5d9548dbc73e" in 1.103s (1.103s including waiting). Image size: 407929286 bytes. | |
| | openshift-kube-scheduler | cert-recovery-controller | cert-recovery-controller-lock | LeaderElection | master-0_6ff9ba5b-fe51-4273-9950-280f666e2301 became leader |
| | openshift-cloud-controller-manager-operator | master-0_1deb7401-d6a4-4319-a7e8-0b8f0c52e45c | cluster-cloud-config-sync-leader | LeaderElection | master-0_1deb7401-d6a4-4319-a7e8-0b8f0c52e45c became leader |
| | openshift-cluster-machine-approver | master-0_8b0ba511-915a-4742-ba49-a25682ecf16e | cluster-machine-approver-leader | LeaderElection | master-0_8b0ba511-915a-4742-ba49-a25682ecf16e became leader |
| | openshift-machine-api | control-plane-machine-set-operator-d8bf84b88-hmpc7_4f3c66e5-877b-4a22-8d10-ce07b33e40ba | control-plane-machine-set-leader | LeaderElection | control-plane-machine-set-operator-d8bf84b88-hmpc7_4f3c66e5-877b-4a22-8d10-ce07b33e40ba became leader |
| | openshift-catalogd | catalogd-controller-manager-67bc7c997f-jdfsm_4ede9521-8e5b-4cda-a57b-0561dffa23dc | catalogd-operator-lock | LeaderElection | catalogd-controller-manager-67bc7c997f-jdfsm_4ede9521-8e5b-4cda-a57b-0561dffa23dc became leader |
| (x8) | openshift-monitoring | kubelet | prometheus-k8s-0 | FailedMount | MountVolume.SetUp failed for volume "prometheus-trusted-ca-bundle" : configmap references non-existent config key: ca-bundle.crt |
| (x8) | openshift-monitoring | kubelet | prometheus-k8s-0 | FailedMount | MountVolume.SetUp failed for volume "secret-prometheus-k8s-tls" : secret "prometheus-k8s-tls" not found |
| (x2) | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e8ae0b5bab647ff989f276cead5f360bcb88c813f181d75dc3106eb5dbde0b39" already present on machine |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-lock | LeaderElection | openshift-kube-scheduler-operator-7485d55966-wcpf8_1fb46346-89cb-490d-8636-5458d015d88d became leader |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator | openshift-kube-scheduler-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-cluster-version | openshift-cluster-version | version | LeaderElection | master-0_8ed2c7f9-f021-4a07-a370-3849836d2732 became leader |
| | openshift-cluster-version | openshift-cluster-version | version | RetrievePayload | Retrieving and verifying payload version="4.18.32" image="quay.io/openshift-release-dev/ocp-release@sha256:6177c447b98c36a42fd45fa2ba413da73d14d0a7ad3aecfa977554f5ae9583cc" |
| | openshift-cluster-version | openshift-cluster-version | version | LoadPayload | Loading payload version="4.18.32" image="quay.io/openshift-release-dev/ocp-release@sha256:6177c447b98c36a42fd45fa2ba413da73d14d0a7ad3aecfa977554f5ae9583cc" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 5"),Available changed from False to True ("StaticPodsAvailable: 1 nodes are active; 1 node is at revision 5") |
| | openshift-cluster-version | openshift-cluster-version | version | PayloadLoaded | Payload loaded version="4.18.32" image="quay.io/openshift-release-dev/ocp-release@sha256:6177c447b98c36a42fd45fa2ba413da73d14d0a7ad3aecfa977554f5ae9583cc" architecture="amd64" |
| | openshift-cluster-node-tuning-operator | cluster-node-tuning-operator-ff6c9b66-k8xp8_014df7b7-5226-4d7a-b4b1-24a0d3ccf0ce | node-tuning-operator-lock | LeaderElection | cluster-node-tuning-operator-ff6c9b66-k8xp8_014df7b7-5226-4d7a-b4b1-24a0d3ccf0ce became leader |
| | openshift-operator-lifecycle-manager | package-server-manager-5c696dbdcd-t7n5b_e05fa2f6-3097-4a81-bea6-e5d20580618d | packageserver-controller-lock | LeaderElection | package-server-manager-5c696dbdcd-t7n5b_e05fa2f6-3097-4a81-bea6-e5d20580618d became leader |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | NodeCurrentRevisionChanged | Updated node "master-0" from revision 0 to 5 because static pod is ready |
| | openshift-image-registry | image-registry-operator | openshift-master-controllers | LeaderElection | cluster-image-registry-operator-96c8c64b8-dtwmd_166f125b-4bd0-4050-8bf8-80cdbac26730 became leader |
| | openshift-image-registry | image-registry-operator | cluster-image-registry-operator | DaemonSetCreated | Created DaemonSet.apps/node-ca -n openshift-image-registry because it was missing |
| | openshift-image-registry | image-registry-operator | cluster-image-registry-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-image-registry | daemonset-controller | node-ca | SuccessfulCreate | Created pod: node-ca-knz2d |
| | openshift-image-registry | kubelet | node-ca-knz2d | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fc2817e5b16d83dac91d1a274fb93521165953e9bdc28f3073b127eacc5a534e" |
| | openshift-image-registry | kubelet | node-ca-knz2d | Created | Created container: node-ca |
| | openshift-image-registry | kubelet | node-ca-knz2d | Started | Started container node-ca |
| | openshift-image-registry | kubelet | node-ca-knz2d | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fc2817e5b16d83dac91d1a274fb93521165953e9bdc28f3073b127eacc5a534e" in 1.954s (1.954s including waiting). Image size: 476466823 bytes. |
| (x9) | openshift-monitoring | kubelet | prometheus-k8s-0 | FailedMount | MountVolume.SetUp failed for volume "secret-prometheus-k8s-thanos-sidecar-tls" : secret "prometheus-k8s-thanos-sidecar-tls" not found |
| (x9) | openshift-monitoring | kubelet | alertmanager-main-0 | FailedMount | MountVolume.SetUp failed for volume "secret-alertmanager-main-tls" : secret "alertmanager-main-tls" not found |
| (x9) | openshift-monitoring | kubelet | alertmanager-main-0 | FailedMount | MountVolume.SetUp failed for volume "alertmanager-trusted-ca-bundle" : configmap references non-existent config key: ca-bundle.crt |
| | openshift-cluster-olm-operator | cluster-olm-operator | cluster-olm-operator-lock | LeaderElection | cluster-olm-operator-55b69c6c48-mzk89_e9b390f4-e562-4dfa-bbf1-c9ca1e16e434 became leader |
| | openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-console namespace |
| | openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-console-operator namespace |
| | openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-console-user-settings namespace |
| | openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | MutatingWebhookConfigurationUpdated | Updated MutatingWebhookConfiguration.admissionregistration.k8s.io/catalogd-mutating-webhook-configuration because it changed |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-lock | LeaderElection | kube-storage-version-migrator-operator-cd5474998-tckph_319a3905-70d5-4181-84ec-97582dafd07a became leader |
| | openshift-machine-config-operator | machine-config-operator | master-0 | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-service-ca | service-ca-controller | service-ca-controller-lock | LeaderElection | service-ca-676cd8b9b5-bfm5s_ef592060-1722-441f-adc4-7877dd6d8550 became leader |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-console-operator | deployment-controller | console-operator | ScalingReplicaSet | Scaled up replica set console-operator-7777d5cc66 to 1 |
| | openshift-console-operator | replicaset-controller | console-operator-7777d5cc66 | SuccessfulCreate | Created pod: console-operator-7777d5cc66-w62mx |
| | openshift-service-ca-operator | service-ca-operator | service-ca-operator-lock | LeaderElection | service-ca-operator-5dc4688546-sg75p_a1d530cd-b24f-4270-91f4-5112e4978a8f became leader |
| | openshift-config-operator | config-operator | config-operator-lock | LeaderElection | openshift-config-operator-7c6bdb986f-fcnqs_f8b864b1-901c-4502-876a-fec315f52ec3 became leader |
| | openshift-config-operator | config-operator-configoperatorcontroller | openshift-config-operator | FastControllerResync | Controller "ConfigOperatorController" resync interval is set to 10s which might lead to client request throttling |
| | openshift-route-controller-manager | route-controller-manager | openshift-route-controllers | LeaderElection | route-controller-manager-6978b88779-vp5tv_a74d9d79-794f-43eb-bfc0-3b2a906e98af became leader |
| | openshift-machine-api | machineapioperator | machine-api-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-operator-lifecycle-manager | job-controller | collect-profiles-29522370 | SuccessfulCreate | Created pod: collect-profiles-29522370-xqzfs |
| | openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SuccessfulCreate | Created job collect-profiles-29522370 |
| | openshift-operator-lifecycle-manager | multus | collect-profiles-29522370-xqzfs | AddedInterface | Add eth0 [10.128.0.93/23] from ovn-kubernetes |
| | openshift-machine-api | cluster-autoscaler-operator-67fd9768b5-6dzpr_c1ff2aab-1782-48d9-890f-4b4ef661872e | cluster-autoscaler-operator-leader | LeaderElection | cluster-autoscaler-operator-67fd9768b5-6dzpr_c1ff2aab-1782-48d9-890f-4b4ef661872e became leader |
| | openshift-operator-lifecycle-manager | kubelet | collect-profiles-29522370-xqzfs | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" already present on machine |
| | openshift-operator-lifecycle-manager | kubelet | collect-profiles-29522370-xqzfs | Created | Created container: collect-profiles |
| | openshift-operator-lifecycle-manager | kubelet | collect-profiles-29522370-xqzfs | Started | Started container collect-profiles |
| | openshift-etcd-operator | openshift-cluster-etcd-operator | openshift-cluster-etcd-operator-lock | LeaderElection | etcd-operator-67bf55ccdd-pjm6n_421ff5f9-2e74-4195-90b2-92dff1f8efc6 became leader |
| (x2) | openshift-etcd-operator | openshift-cluster-etcd-operator-member-observer-controller-etcdmemberscontroller | etcd-operator | ReportEtcdMembersErrorUpdatingStatus | etcds.operator.openshift.io "cluster" not found |
| | openshift-etcd-operator | openshift-cluster-etcd-operator | etcd-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" to "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" to "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" to "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" |
| | openshift-operator-lifecycle-manager | job-controller | collect-profiles-29522370 | Completed | Job completed |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" to "ScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" |
| | openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SawCompletedJob | Saw completed job: collect-profiles-29522370, condition: Complete |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "ScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" to "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-installer-controller | etcd-operator | NodeCurrentRevisionChanged | Updated node "master-0" from revision 1 to 2 because static pod is ready |
openshift-kube-apiserver-operator |
kube-apiserver-operator |
kube-apiserver-operator |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} | |
openshift-kube-apiserver-operator |
kube-apiserver-operator |
kube-apiserver-operator-lock |
LeaderElection |
kube-apiserver-operator-54984b6678-p5mdv_cddbaf82-1fe4-4c7f-9726-4768430d9038 became leader | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ConfigMapUpdated |
Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: caused by changes in data.config-file.yaml | |
| (x2) | openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
SecretCreated |
Created Secret/telemeter-client -n openshift-monitoring because it was missing |
| (x2) | openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/telemeter-client because it was missing |
| (x2) | openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ServiceCreated |
Created Service/telemeter-client -n openshift-monitoring because it was missing |
| (x2) | openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/telemeter-client because it was missing |
| (x2) | openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ServiceAccountCreated |
Created ServiceAccount/telemeter-client -n openshift-monitoring because it was missing |
| (x2) | openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
SecretCreated |
Created Secret/telemeter-client-kube-rbac-proxy-config -n openshift-monitoring because it was missing |
openshift-kube-apiserver-operator |
kube-apiserver-operator-audit-policy-controller-auditpolicycontroller |
kube-apiserver-operator |
FastControllerResync |
Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling | |
openshift-apiserver-operator |
openshift-apiserver-operator |
openshift-apiserver-operator |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Degraded changed from False to True ("APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver ()"),Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.128.0.35:8443/apis/apps.openshift.io/v1: bad status from https://10.128.0.35:8443/apis/apps.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from https://10.128.0.35:8443/apis/authorization.openshift.io/v1: bad status from https://10.128.0.35:8443/apis/authorization.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.128.0.35:8443/apis/build.openshift.io/v1: bad status from https://10.128.0.35:8443/apis/build.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.128.0.35:8443/apis/image.openshift.io/v1: bad status from https://10.128.0.35:8443/apis/image.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.128.0.35:8443/apis/project.openshift.io/v1: bad status from https://10.128.0.35:8443/apis/project.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.128.0.35:8443/apis/quota.openshift.io/v1: bad status from https://10.128.0.35:8443/apis/quota.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.128.0.35:8443/apis/route.openshift.io/v1: bad status from 
https://10.128.0.35:8443/apis/route.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.128.0.35:8443/apis/security.openshift.io/v1: bad status from https://10.128.0.35:8443/apis/security.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.128.0.35:8443/apis/template.openshift.io/v1: bad status from https://10.128.0.35:8443/apis/template.openshift.io/v1: 401" to "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"" | |
openshift-apiserver-operator |
openshift-apiserver-operator |
openshift-apiserver-operator-lock |
LeaderElection |
openshift-apiserver-operator-6d4655d9cf-5f5g9_f410a11f-7743-49c6-ba4b-271ed02919c0 became leader | |
openshift-network-operator |
network-operator |
network-operator-lock |
LeaderElection |
master-0_34e40c47-1c25-4b55-9a22-45b354291b7e became leader | |
openshift-network-operator |
cluster-network-operator |
network-operator |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} | |
openshift-apiserver-operator |
openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice |
openshift-apiserver-operator |
FastControllerResync |
Controller "openshift-apiserver-APIService" resync interval is set to 10s which might lead to client request throttling | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"" to "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node." | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ServiceCreated |
Created Service/monitoring-plugin -n openshift-monitoring because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ServiceAccountCreated |
Created ServiceAccount/monitoring-plugin -n openshift-monitoring because it was missing | |
openshift-apiserver-operator |
openshift-apiserver-operator-audit-policy-controller-auditpolicycontroller |
openshift-apiserver-operator |
FastControllerResync |
Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling | |
openshift-multus |
kubelet |
cni-sysctl-allowlist-ds-c5mq6 |
Started |
Started container kube-multus-additional-cni-plugins | |
openshift-multus |
daemonset-controller |
cni-sysctl-allowlist-ds |
SuccessfulCreate |
Created pod: cni-sysctl-allowlist-ds-c5mq6 | |
openshift-multus |
kubelet |
cni-sysctl-allowlist-ds-c5mq6 |
Created |
Created container: kube-multus-additional-cni-plugins | |
openshift-multus |
kubelet |
cni-sysctl-allowlist-ds-c5mq6 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe683caef773a1963fc13f96afe58892563ea9921db8ac39369e3a9a05ea7181" already present on machine | |
openshift-monitoring |
replicaset-controller |
telemeter-client-7fbdcd9689 |
SuccessfulCreate |
Created pod: telemeter-client-7fbdcd9689-jnzwg | |
openshift-monitoring |
deployment-controller |
monitoring-plugin |
ScalingReplicaSet |
Scaled up replica set monitoring-plugin-6f86647c68 to 1 | |
openshift-monitoring |
replicaset-controller |
monitoring-plugin-6f86647c68 |
SuccessfulCreate |
Created pod: monitoring-plugin-6f86647c68-r4plh | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 3"),Available changed from False to True ("StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3") | |
openshift-monitoring |
deployment-controller |
telemeter-client |
ScalingReplicaSet |
Scaled up replica set telemeter-client-7fbdcd9689 to 1 | |
openshift-monitoring |
kubelet |
monitoring-plugin-6f86647c68-r4plh |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aaa92509b71c898caed43ac2b5d3b3fc44fff333855789eb1d7df15f08e91ea3" | |
openshift-monitoring |
kubelet |
telemeter-client-7fbdcd9689-jnzwg |
Created |
Created container: kube-rbac-proxy | |
openshift-monitoring |
multus |
monitoring-plugin-6f86647c68-r4plh |
AddedInterface |
Add eth0 [10.128.0.94/23] from ovn-kubernetes | |
openshift-monitoring |
kubelet |
telemeter-client-7fbdcd9689-jnzwg |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine | |
openshift-monitoring |
kubelet |
telemeter-client-7fbdcd9689-jnzwg |
Started |
Started container kube-rbac-proxy | |
openshift-monitoring |
multus |
telemeter-client-7fbdcd9689-jnzwg |
AddedInterface |
Add eth0 [10.128.0.95/23] from ovn-kubernetes | |
openshift-monitoring |
kubelet |
telemeter-client-7fbdcd9689-jnzwg |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9899b0f08b1202d149e16f09616ee7b8f37e3cda642386d93a6d3f63d72a316b" already present on machine | |
openshift-monitoring |
kubelet |
telemeter-client-7fbdcd9689-jnzwg |
Created |
Created container: telemeter-client | |
openshift-monitoring |
kubelet |
telemeter-client-7fbdcd9689-jnzwg |
Started |
Started container telemeter-client | |
openshift-monitoring |
kubelet |
telemeter-client-7fbdcd9689-jnzwg |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52ba78768a24efe94f7f95fe5bdd3c6156919979d5882682e06ae4a8a8d3fb4a" already present on machine | |
openshift-monitoring |
kubelet |
telemeter-client-7fbdcd9689-jnzwg |
Created |
Created container: reload | |
openshift-monitoring |
kubelet |
telemeter-client-7fbdcd9689-jnzwg |
Started |
Started container reload | |
openshift-monitoring |
kubelet |
monitoring-plugin-6f86647c68-r4plh |
Started |
Started container monitoring-plugin | |
openshift-monitoring |
kubelet |
monitoring-plugin-6f86647c68-r4plh |
Created |
Created container: monitoring-plugin | |
openshift-multus |
kubelet |
cni-sysctl-allowlist-ds-c5mq6 |
Killing |
Stopping container kube-multus-additional-cni-plugins | |
openshift-monitoring |
kubelet |
monitoring-plugin-6f86647c68-r4plh |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aaa92509b71c898caed43ac2b5d3b3fc44fff333855789eb1d7df15f08e91ea3" in 1.413s (1.413s including waiting). Image size: 442636622 bytes. | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Degraded changed from True to False ("All is well"),Available changed from False to True ("All is well") | |
| (x7) | openshift-console-operator |
kubelet |
console-operator-7777d5cc66-w62mx |
FailedMount |
MountVolume.SetUp failed for volume "trusted-ca" : configmap references non-existent config key: ca-bundle.crt |
openshift-kube-apiserver |
cert-regeneration-controller |
cert-regeneration-controller-lock |
LeaderElection |
master-0_7fb04e80-726f-4067-8cfe-86acf9311d20 became leader | |
openshift-multus |
replicaset-controller |
multus-admission-controller-bb4ff5654 |
SuccessfulCreate |
Created pod: multus-admission-controller-bb4ff5654-mmnxt | |
openshift-multus |
deployment-controller |
multus-admission-controller |
ScalingReplicaSet |
Scaled up replica set multus-admission-controller-bb4ff5654 to 1 | |
openshift-multus |
kubelet |
multus-admission-controller-bb4ff5654-mmnxt |
Created |
Created container: multus-admission-controller | |
openshift-multus |
multus |
multus-admission-controller-bb4ff5654-mmnxt |
AddedInterface |
Add eth0 [10.128.0.96/23] from ovn-kubernetes | |
openshift-multus |
kubelet |
multus-admission-controller-bb4ff5654-mmnxt |
Started |
Started container multus-admission-controller | |
openshift-multus |
kubelet |
multus-admission-controller-bb4ff5654-mmnxt |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine | |
openshift-multus |
kubelet |
multus-admission-controller-bb4ff5654-mmnxt |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bbe162375a11ed3810a1081c30dd400f461f2421d5f1e27d8792048bbd216956" already present on machine | |
openshift-multus |
kubelet |
multus-admission-controller-bb4ff5654-mmnxt |
Created |
Created container: kube-rbac-proxy | |
openshift-multus |
kubelet |
multus-admission-controller-bb4ff5654-mmnxt |
Started |
Started container kube-rbac-proxy | |
openshift-multus |
deployment-controller |
multus-admission-controller |
ScalingReplicaSet |
Scaled down replica set multus-admission-controller-6d678b8d67 to 0 from 1 | |
openshift-multus |
kubelet |
multus-admission-controller-6d678b8d67-rzbff |
Killing |
Stopping container multus-admission-controller | |
openshift-multus |
replicaset-controller |
multus-admission-controller-6d678b8d67 |
SuccessfulDelete |
Deleted pod: multus-admission-controller-6d678b8d67-rzbff | |
openshift-multus |
kubelet |
multus-admission-controller-6d678b8d67-rzbff |
Killing |
Stopping container kube-rbac-proxy | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator |
kube-controller-manager-operator-lock |
LeaderElection |
kube-controller-manager-operator-78ff47c7c5-xvzq9_3443ed67-8cd9-4285-be99-c1d9fb7afafc became leader | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator |
kube-controller-manager-operator |
FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | InstallerPodFailed | installer errors: installer: ionalCertSecretNamePrefixes: ([]string) <nil>, CertConfigMapNamePrefixes: ([]string) (len=2 cap=2) { (string) (len=20) "aggregator-client-ca", (string) (len=9) "client-ca" }, OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) { (string) (len=17) "trusted-ca-bundle" }, CertDir: (string) (len=66) "/etc/kubernetes/static-pod-resources/kube-controller-manager-certs", ResourceDir: (string) (len=36) "/etc/kubernetes/static-pod-resources", PodManifestDir: (string) (len=25) "/etc/kubernetes/manifests", Timeout: (time.Duration) 2m0s, StaticPodManifestsLockFile: (string) "", PodMutationFns: ([]installerpod.PodMutationFunc) <nil>, KubeletVersion: (string) "" }) I0217 15:15:18.390634 1 cmd.go:413] Getting controller reference for node master-0 I0217 15:15:18.487744 1 cmd.go:426] Waiting for installer revisions to settle for node master-0 I0217 15:15:18.487901 1 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false I0217 15:15:18.487928 1 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false I0217 15:15:18.492607 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting I0217 15:15:28.498671 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting W0217 15:15:38.495189 1 cmd.go:470] Error getting installer pods on current node master-0: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused I0217 15:15:48.508047 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0 I0217 15:16:18.508277 1 cmd.go:524] Getting installer pods for node master-0 F0217 15:16:32.512683 1 cmd.go:109] Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": net/http: request canceled (Client.Timeout exceeded while awaiting headers) |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-satokensignercontroller | kube-controller-manager-operator | SATokenSignerControllerOK | found expected kube-apiserver endpoints |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 4:\nNodeInstallerDegraded: installer: ionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=66) \"/etc/kubernetes/static-pod-resources/kube-controller-manager-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0217 15:15:18.390634 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0217 15:15:18.487744 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0217 15:15:18.487901 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0217 15:15:18.487928 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0217 15:15:18.492607 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting\nNodeInstallerDegraded: I0217 15:15:28.498671 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting\nNodeInstallerDegraded: W0217 15:15:38.495189 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: I0217 15:15:48.508047 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0217 15:16:18.508277 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0217 15:16:32.512683 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: " |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-satokensignercontroller | kube-controller-manager-operator | SecretCreated | Created Secret/next-service-account-private-key -n openshift-kube-controller-manager-operator because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | ConfigMapUpdated | Updated ConfigMap/sa-token-signing-certs -n openshift-kube-apiserver: cause by changes in data.service-account-002.pub |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-satokensignercontroller | kube-controller-manager-operator | ConfigMapUpdated | Updated ConfigMap/sa-token-signing-certs -n openshift-config-managed: cause by changes in data.service-account-002.pub |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | StartingNewRevision | new revision 4 triggered by "required configmap/sa-token-signing-certs has changed" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-pod-4 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/config-4 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-4 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/bound-sa-token-signing-certs-4 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/etcd-serving-ca-4 -n openshift-kube-apiserver because it was missing |
| | openshift-network-console | replicaset-controller | networking-console-plugin-bd6d6f87f | SuccessfulCreate | Created pod: networking-console-plugin-bd6d6f87f-72mnn |
| | openshift-network-console | deployment-controller | networking-console-plugin | ScalingReplicaSet | Scaled up replica set networking-console-plugin-bd6d6f87f to 1 |
| | openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-network-console namespace |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator-lock | LeaderElection | openshift-controller-manager-operator-5f5f84757d-dsfkk_4810f01f-465f-45b3-8801-2faff4ab47c2 became leader |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-config-observer-configobserver | openshift-controller-manager-operator | ObservedConfigChanged | Writing updated observed config:   map[string]any{   "build": map[string]any{"buildDefaults": map[string]any{"resources": map[string]any{}}, "imageTemplateFormat": map[string]any{"format": string("quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e45a7281a6"...)}},   "controllers": []any{   ... // 8 identical elements   string("openshift.io/deploymentconfig"),   string("openshift.io/image-import"),   strings.Join({ + "-",   "openshift.io/image-puller-rolebindings",   }, ""),   string("openshift.io/image-signature-import"),   string("openshift.io/image-trigger"),   ... // 2 identical elements   string("openshift.io/origin-namespace"),   string("openshift.io/serviceaccount"),   strings.Join({ + "-",   "openshift.io/serviceaccount-pull-secrets",   }, ""),   string("openshift.io/templateinstance"),   string("openshift.io/templateinstancefinalizer"),   string("openshift.io/unidling"),   },   "deployer": map[string]any{"imageTemplateFormat": map[string]any{"format": string("quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:45836e9b83"...)}},   "featureGates": []any{string("BuildCSIVolumes=true")},   "ingress": map[string]any{"ingressIPNetworkCIDR": string("")},   } |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-server-ca-4 -n openshift-kube-apiserver because it was missing |
| | openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled up replica set controller-manager-f6b44f49 to 1 from 0 |
| | openshift-controller-manager | kubelet | controller-manager-b9c8fdfbc-rh9v2 | Killing | Stopping container controller-manager |
| | openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled down replica set route-controller-manager-6978b88779 to 0 from 1 |
| | openshift-route-controller-manager | replicaset-controller | route-controller-manager-68f4c9ccfc | SuccessfulCreate | Created pod: route-controller-manager-68f4c9ccfc-vg949 |
| | openshift-network-console | kubelet | networking-console-plugin-bd6d6f87f-72mnn | FailedMount | MountVolume.SetUp failed for volume "networking-console-plugin-cert" : secret "networking-console-plugin-cert" not found |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapUpdated | Updated ConfigMap/config -n openshift-controller-manager: cause by changes in data.config.yaml |
| | openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled down replica set controller-manager-b9c8fdfbc to 0 from 1 |
| | openshift-controller-manager | replicaset-controller | controller-manager-f6b44f49 | SuccessfulCreate | Created pod: controller-manager-f6b44f49-s25nf |
| | openshift-route-controller-manager | replicaset-controller | route-controller-manager-6978b88779 | SuccessfulDelete | Deleted pod: route-controller-manager-6978b88779-vp5tv |
| | openshift-route-controller-manager | kubelet | route-controller-manager-6978b88779-vp5tv | Killing | Stopping container route-controller-manager |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapUpdated | Updated ConfigMap/config -n openshift-route-controller-manager: cause by changes in data.config.yaml |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | DeploymentUpdated | Updated Deployment.apps/route-controller-manager -n openshift-route-controller-manager because it changed |
| | openshift-controller-manager | replicaset-controller | controller-manager-b9c8fdfbc | SuccessfulDelete | Deleted pod: controller-manager-b9c8fdfbc-rh9v2 |
| | openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled up replica set route-controller-manager-68f4c9ccfc to 1 from 0 |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | DeploymentUpdated | Updated Deployment.apps/controller-manager -n openshift-controller-manager because it changed |
| | openshift-network-console | kubelet | networking-console-plugin-bd6d6f87f-72mnn | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a913cef121c9a6c3ddc57b01fc807bb042e5a903489c05f99e6e2da9e6ec0b98" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kubelet-serving-ca-4 -n openshift-kube-apiserver because it was missing |
| | openshift-network-console | multus | networking-console-plugin-bd6d6f87f-72mnn | AddedInterface | Add eth0 [10.128.0.97/23] from ovn-kubernetes |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Progressing changed from False to True ("Progressing: deployment/controller-manager: observed generation is 5, desired generation is 6.\nProgressing: deployment/route-controller-manager: observed generation is 3, desired generation is 4.\nProgressing: openshiftcontrollermanagers.operator.openshift.io/cluster: observed generation is 2, desired generation is 3.") |
| | openshift-network-console | kubelet | networking-console-plugin-bd6d6f87f-72mnn | Created | Created container: networking-console-plugin |
| (x3) | openshift-multus | kubelet | cni-sysctl-allowlist-ds-c5mq6 | Unhealthy | Readiness probe errored: rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1 |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/sa-token-signing-certs-4 -n openshift-kube-apiserver because it was missing |
| | openshift-network-console | kubelet | networking-console-plugin-bd6d6f87f-72mnn | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a913cef121c9a6c3ddc57b01fc807bb042e5a903489c05f99e6e2da9e6ec0b98" in 1.532s (1.532s including waiting). Image size: 441507672 bytes. |
| | openshift-route-controller-manager | route-controller-manager | openshift-route-controllers | LeaderElection | route-controller-manager-68f4c9ccfc-vg949_47a304de-3d3a-4dfe-a770-c28b1bdb72c0 became leader |
| | openshift-network-console | kubelet | networking-console-plugin-bd6d6f87f-72mnn | Started | Started container networking-console-plugin |
| | openshift-route-controller-manager | kubelet | route-controller-manager-68f4c9ccfc-vg949 | Started | Started container route-controller-manager |
| | openshift-controller-manager | multus | controller-manager-f6b44f49-s25nf | AddedInterface | Add eth0 [10.128.0.98/23] from ovn-kubernetes |
| | openshift-route-controller-manager | multus | route-controller-manager-68f4c9ccfc-vg949 | AddedInterface | Add eth0 [10.128.0.99/23] from ovn-kubernetes |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 5, desired generation is 6.\nProgressing: deployment/route-controller-manager: observed generation is 3, desired generation is 4.\nProgressing: openshiftcontrollermanagers.operator.openshift.io/cluster: observed generation is 2, desired generation is 3." to "Progressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1",Available changed from True to False ("Available: no pods available on any node.") |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-audit-policies-4 -n openshift-kube-apiserver because it was missing |
| | openshift-route-controller-manager | kubelet | route-controller-manager-68f4c9ccfc-vg949 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0871b6c16b38a2eda5d1c89fd75079aff0775224307e940557e6fda6ba229f38" already present on machine |
| | openshift-controller-manager | kubelet | controller-manager-f6b44f49-s25nf | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f122c11c2f6a10ca150b136f7291d2e135b3a182d67809aa49727da289787cee" already present on machine |
| | openshift-route-controller-manager | kubelet | route-controller-manager-68f4c9ccfc-vg949 | Created | Created container: route-controller-manager |
| | openshift-controller-manager | kubelet | controller-manager-f6b44f49-s25nf | Created | Created container: controller-manager |
| | openshift-controller-manager | kubelet | controller-manager-f6b44f49-s25nf | Started | Started container controller-manager |
| | openshift-controller-manager | openshift-controller-manager | openshift-master-controllers | LeaderElection | controller-manager-f6b44f49-s25nf became leader |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/etcd-client-4 -n openshift-kube-apiserver because it was missing |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator | csi-snapshot-controller-operator-lock | LeaderElection | csi-snapshot-controller-operator-7b87b97578-9fpgj_2d4f6c65-1590-44e8-bf98-91e9e6552d28 became leader |
| | openshift-kube-controller-manager | kubelet | installer-4-retry-1-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e8ae0b5bab647ff989f276cead5f360bcb88c813f181d75dc3106eb5dbde0b39" already present on machine |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | PodCreated | Created Pod/installer-4-retry-1-master-0 -n openshift-kube-controller-manager because it was missing |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator | csi-snapshot-controller-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-kube-controller-manager | multus | installer-4-retry-1-master-0 | AddedInterface | Add eth0 [10.128.0.100/23] from ovn-kubernetes |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-client-token-4 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-serving-certkey-4 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-controller-manager | kubelet | installer-4-retry-1-master-0 | Started | Started container installer |
| | openshift-kube-controller-manager | kubelet | installer-4-retry-1-master-0 | Created | Created container: installer |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/webhook-authenticator-4 -n openshift-kube-apiserver because it was missing |
| | openshift-console-operator | multus | console-operator-7777d5cc66-w62mx | AddedInterface | Add eth0 [10.128.0.92/23] from ovn-kubernetes |
| | openshift-console-operator | kubelet | console-operator-7777d5cc66-w62mx | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:95e7647e6fda21b94b692f03908e4cd154e3374fca0560229c646fefe2c46730" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | RevisionTriggered | new revision 4 triggered by "required configmap/sa-token-signing-certs has changed" |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-console-operator | kubelet | console-operator-7777d5cc66-w62mx | Started | Started container console-operator |
| | openshift-console-operator | kubelet | console-operator-7777d5cc66-w62mx | Created | Created container: console-operator |
| | openshift-console-operator | kubelet | console-operator-7777d5cc66-w62mx | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:95e7647e6fda21b94b692f03908e4cd154e3374fca0560229c646fefe2c46730" in 2.282s (2.282s including waiting). Image size: 507065596 bytes. |
| | openshift-console-operator | console-operator-console-pdb-controller-poddisruptionbudgetcontroller | console-operator | PodDisruptionBudgetCreated | Created PodDisruptionBudget.policy/console -n openshift-console because it was missing |
| | openshift-console | replicaset-controller | downloads-dcd7b7d95 | SuccessfulCreate | Created pod: downloads-dcd7b7d95-vtnfs |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded message changed from "All is well" to "RouteHealthDegraded: route.route.openshift.io \"console\" not found" |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded set to False ("All is well"),Progressing set to False ("All is well"),Available set to Unknown (""),Upgradeable set to True ("All is well"),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"console.openshift.io" "consoleplugins" "" "monitoring-plugin"} {"console.openshift.io" "consoleplugins" "" "networking-console-plugin"} {"operator.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "infrastructures" "" "cluster"} {"config.openshift.io" "proxies" "" "cluster"} {"config.openshift.io" "oauths" "" "cluster"} {"oauth.openshift.io" "oauthclients" "" "console"} {"" "namespaces" "" "openshift-console-operator"} {"" "namespaces" "" "openshift-console"} {"" "configmaps" "openshift-config-managed" "console-public"}],status.versions changed from [] to [{"operator" "4.18.32"}] |
| | openshift-console-operator | console-operator-console-downloads-deployment-controller-consoledownloadsdeploymentsynccontroller | console-operator | DeploymentCreated | Created Deployment.apps/downloads -n openshift-console because it was missing |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorVersionChanged | clusteroperator/console version "operator" changed from "" to "4.18.32" |
| | openshift-console-operator | console-operator-downloads-pdb-controller-poddisruptionbudgetcontroller | console-operator | PodDisruptionBudgetCreated | Created PodDisruptionBudget.policy/downloads -n openshift-console because it was missing |
| | openshift-console-operator | console-operator-health-check-controller-healthcheckcontroller | console-operator | FastControllerResync | Controller "HealthCheckController" resync interval is set to 30s which might lead to client request throttling |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | NodeTargetRevisionChanged | Updating node "master-0" from revision 3 to 4 because node master-0 with revision 3 is the oldest |
| | openshift-console-operator | console-operator | console-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-console-operator | console-operator | console-operator-lock | LeaderElection | console-operator-7777d5cc66-w62mx_6813d40f-e3d8-4173-8b79-b0b1d553ca4c became leader |
| (x2) | openshift-console | controllermanager | console | NoPods | No matching pods found |
| | openshift-console | deployment-controller | downloads | ScalingReplicaSet | Scaled up replica set downloads-dcd7b7d95 to 1 |
| | openshift-console | controllermanager | downloads | NoPods | No matching pods found |
| | openshift-console | kubelet | downloads-dcd7b7d95-vtnfs | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7fccb6e19eb4caa16d32f4cf59670c2c741c98b099d1f12368b85aab3f84dc38" |
| | openshift-console | multus | downloads-dcd7b7d95-vtnfs | AddedInterface | Add eth0 [10.128.0.101/23] from ovn-kubernetes |
| | openshift-console-operator | console-operator-resource-sync-controller-resourcesynccontroller | console-operator | ConfigMapCreated | Created ConfigMap/oauth-serving-cert -n openshift-console because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 3; 0 nodes have achieved new revision 4"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3; 0 nodes have achieved new revision 4" |
| | openshift-console-operator | console-operator-oauthclient-secret-controller-oauthclientsecretcontroller | console-operator | SecretCreated | Created Secret/console-oauth-config -n openshift-console because it was missing |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded message changed from "RouteHealthDegraded: route.route.openshift.io \"console\" not found" to "RouteHealthDegraded: route.route.openshift.io \"console\" not found\nOAuthClientsControllerDegraded: route.route.openshift.io \"console\" not found" |
| | openshift-console-operator | console-operator-console-service-controller-consoleservicecontroller | console-operator | ServiceCreated | Created Service/console -n openshift-console because it was missing |
| | openshift-console-operator | console-operator-console-service-controller-consoleservicecontroller | console-operator | ServiceCreated | Created Service/downloads -n openshift-console because it was missing |
| | openshift-console-operator | console-operator-console-operator-consoleoperator | console-operator | ConfigMapCreated | Created ConfigMap/console-config -n openshift-console because it was missing |
| | openshift-console-operator | console-operator-resource-sync-controller-resourcesynccontroller | console-operator | ConfigMapCreated | Created ConfigMap/default-ingress-cert -n openshift-console because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | PodCreated | Created Pod/installer-4-master-0 -n openshift-kube-apiserver because it was missing |
| | openshift-cluster-storage-operator | cluster-storage-operator | cluster-storage-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}} |
| | openshift-console-operator | console-operator-console-operator-consoleoperator | console-operator | DeploymentCreated | Created Deployment.apps/console -n openshift-console because it was missing |
| | openshift-console | replicaset-controller | console-98f66b5dc | SuccessfulCreate | Created pod: console-98f66b5dc-p2gxf |
| | openshift-console-operator | console-operator-console-operator-consoleoperator | console-operator | ConfigMapCreated | Created ConfigMap/console-public -n openshift-config-managed because it was missing |
| | openshift-kube-apiserver | kubelet | installer-4-master-0 | Created | Created container: installer |
| | openshift-cluster-storage-operator | cluster-storage-operator | cluster-storage-operator-lock | LeaderElection | cluster-storage-operator-75b869db96-qbmw5_f95d5a62-aaca-4070-9968-7d4475115e5d became leader |
| | openshift-kube-apiserver | kubelet | installer-4-master-0 | Started | Started container installer |
| | openshift-kube-apiserver | kubelet | installer-4-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49" already present on machine |
| | openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled up replica set console-98f66b5dc to 1 |
| | openshift-kube-apiserver | multus | installer-4-master-0 | AddedInterface | Add eth0 [10.128.0.102/23] from ovn-kubernetes |
| | openshift-console | kubelet | console-98f66b5dc-p2gxf | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8164cc9e16e8be9ea18be73c9df5041af326ed6b3059faff08f76e568cf4dc2" |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded message changed from "RouteHealthDegraded: route.route.openshift.io \"console\" not found\nOAuthClientsControllerDegraded: route.route.openshift.io \"console\" not found" to "RouteHealthDegraded: route.route.openshift.io \"console\" not found" |
| | openshift-console | multus | console-98f66b5dc-p2gxf | AddedInterface | Add eth0 [10.128.0.103/23] from ovn-kubernetes |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Progressing changed from True to False ("All is well"),Available changed from False to True ("All is well") |
| | openshift-console | kubelet | console-98f66b5dc-p2gxf | Started | Started container console |
| | openshift-console | kubelet | console-98f66b5dc-p2gxf | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8164cc9e16e8be9ea18be73c9df5041af326ed6b3059faff08f76e568cf4dc2" in 3.578s (3.578s including waiting). Image size: 628694305 bytes. |
| | openshift-console | kubelet | console-98f66b5dc-p2gxf | Created | Created container: console |
| | openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled up replica set console-86d4dfb9dd to 1 |
| | openshift-console | replicaset-controller | console-86d4dfb9dd | SuccessfulCreate | Created pod: console-86d4dfb9dd-rz6cj |
| | openshift-console | kubelet | console-86d4dfb9dd-rz6cj | Created | Created container: console |
| | openshift-console | kubelet | console-86d4dfb9dd-rz6cj | Started | Started container console |
| | openshift-console | multus | console-86d4dfb9dd-rz6cj | AddedInterface | Add eth0 [10.128.0.104/23] from ovn-kubernetes |
| | openshift-console | kubelet | console-86d4dfb9dd-rz6cj | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8164cc9e16e8be9ea18be73c9df5041af326ed6b3059faff08f76e568cf4dc2" already present on machine |
| | openshift-authentication-operator | cluster-authentication-operator | cluster-authentication-operator-lock | LeaderElection | authentication-operator-755d954778-jrdqm_e15e299b-84fc-4f6e-960e-15b380e2622d became leader |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.180.229:443/healthz\": dial tcp 172.30.180.229:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-oauth-apiserver (container is not ready in apiserver-865765995-c58rq pod)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.180.229:443/healthz\": dial tcp 172.30.180.229:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready",Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.41:8443/apis/oauth.openshift.io/v1: bad status from https://10.128.0.41:8443/apis/oauth.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.128.0.41:8443/apis/user.openshift.io/v1: bad status from https://10.128.0.41:8443/apis/user.openshift.io/v1: 401\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.180.229:443/healthz\": dial tcp 172.30.180.229:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" to "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: endpoints for service/api in \"openshift-oauth-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: endpoints for service/api in \"openshift-oauth-apiserver\" have no addresses with port name \"https\"\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.180.229:443/healthz\": dial tcp 172.30.180.229:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" |
| | openshift-authentication-operator | oauth-apiserver-audit-policy-controller-auditpolicycontroller | authentication-operator | FastControllerResync | Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: endpoints for service/api in \"openshift-oauth-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: endpoints for service/api in \"openshift-oauth-apiserver\" have no addresses with port name \"https\"\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.180.229:443/healthz\": dial tcp 172.30.180.229:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" to "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.180.229:443/healthz\": dial tcp 172.30.180.229:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" |
| | openshift-authentication-operator | oauth-apiserver-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | authentication-operator | FastControllerResync | Controller "openshift-apiserver-APIService" resync interval is set to 10s which might lead to client request throttling |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-oauth-apiserver (container is not ready in apiserver-865765995-c58rq pod)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.180.229:443/healthz\": dial tcp 172.30.180.229:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-oauth-apiserver (container is not ready in apiserver-865765995-c58rq pod)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.180.229:443/healthz\": dial tcp 172.30.180.229:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Progressing changed from False to True ("SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected"),Available changed from Unknown to False ("DeploymentAvailable: 0 replicas available for console deployment") |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.180.229:443/healthz\": dial tcp 172.30.180.229:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: route.route.openshift.io \"oauth-openshift\" not found" to "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: failed to retrieve route from cache: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.180.229:443/healthz\": dial tcp 172.30.180.229:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: route.route.openshift.io \"oauth-openshift\" not found" |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Progressing message changed from "SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected" to "SyncLoopRefreshProgressing: working toward version 4.18.32, 0 replicas available" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing changed from False to True (""),Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.180.229:443/healthz\": dial tcp 172.30.180.229:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" to "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.180.229:443/healthz\": dial tcp 172.30.180.229:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: route.route.openshift.io \"oauth-openshift\" not found" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-oauth-apiserver (container is not ready in apiserver-865765995-c58rq pod)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.180.229:443/healthz\": dial tcp 172.30.180.229:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-oauth-apiserver (container is not ready in apiserver-865765995-c58rq pod)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.180.229:443/healthz\": dial tcp 172.30.180.229:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWellKnownReadyControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-oauth-apiserver (container is not ready in apiserver-865765995-c58rq pod)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.180.229:443/healthz\": dial tcp 172.30.180.229:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWellKnownReadyControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found" to "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-oauth-apiserver (container is not ready in apiserver-865765995-c58rq pod)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: \nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.180.229:443/healthz\": dial tcp 172.30.180.229:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-oauth-apiserver (container is not ready in apiserver-865765995-c58rq pod)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: \nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.180.229:443/healthz\": dial tcp 172.30.180.229:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found" to "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-oauth-apiserver (container is not ready in apiserver-865765995-c58rq pod)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: no ingress for host oauth-openshift.apps.sno.openstack.lab in route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: \nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.180.229:443/healthz\": dial tcp 172.30.180.229:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: failed to retrieve route from cache: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.180.229:443/healthz\": dial tcp 172.30.180.229:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: route.route.openshift.io \"oauth-openshift\" not found" to "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: failed to retrieve route from cache: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.180.229:443/healthz\": dial tcp 172.30.180.229:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-oauth-apiserver (container is not ready in apiserver-865765995-c58rq pod)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: no ingress for host oauth-openshift.apps.sno.openstack.lab in route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: \nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.180.229:443/healthz\": dial tcp 172.30.180.229:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found" to "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-oauth-apiserver (container is not ready in apiserver-865765995-c58rq pod)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: no ingress for host oauth-openshift.apps.sno.openstack.lab in route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.180.229:443/healthz\": dial tcp 172.30.180.229:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found" |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: failed to retrieve route from cache: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.180.229:443/healthz\": dial tcp 172.30.180.229:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.180.229:443/healthz\": dial tcp 172.30.180.229:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" |
| (x4) | openshift-console-operator | console-operator-console-downloads-deployment-controller-consoledownloadsdeploymentsynccontroller | console-operator | DeploymentUpdated | Updated Deployment.apps/downloads -n openshift-console because it changed |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-oauth-apiserver (container is not ready in apiserver-865765995-c58rq pod)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.180.229:443/healthz\": dial tcp 172.30.180.229:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found" to "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-oauth-apiserver (container is not ready in apiserver-865765995-c58rq pod)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.180.229:443/healthz\": dial tcp 172.30.180.229:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" |
| | openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObservedConfigWriteError | Failed to write observed config: Operation cannot be fulfilled on authentications.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-oauth-apiserver (container is not ready in apiserver-865765995-c58rq pod)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: no ingress for host oauth-openshift.apps.sno.openstack.lab in route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.180.229:443/healthz\": dial tcp 172.30.180.229:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found" to "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-oauth-apiserver (container is not ready in apiserver-865765995-c58rq pod)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.180.229:443/healthz\": dial tcp 172.30.180.229:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found" | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-oauth-apiserver (container is not ready in apiserver-865765995-c58rq pod)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.180.229:443/healthz\": dial tcp 172.30.180.229:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-oauth-apiserver (container is not ready in apiserver-865765995-c58rq pod)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.180.229:443/healthz\": dial tcp 172.30.180.229:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-oauth-apiserver (container is not ready in apiserver-865765995-c58rq pod)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.180.229:443/healthz\": dial tcp 172.30.180.229:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-oauth-apiserver (container is not ready in apiserver-865765995-c58rq pod)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.180.229:443/healthz\": dial tcp 172.30.180.229:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" | |
| (x2) | openshift-authentication-operator |
cluster-authentication-operator-config-observer-configobserver |
authentication-operator |
ObservedConfigChanged |
Writing updated section ("oauthServer") of observed config: " map[string]any{\n \t\"corsAllowedOrigins\": []any{string(`//127\\.0\\.0\\.1(:|$)`), string(\"//localhost(:|$)\")},\n \t\"oauthConfig\": map[string]any{\n- \t\t\"assetPublicURL\": string(\"\"),\n+ \t\t\"assetPublicURL\": string(\"https://console-openshift-console.apps.sno.openstack.lab\"),\n \t\t\"loginURL\": string(\"https://api.sno.openstack.lab:6443\"),\n \t\t\"templates\": map[string]any{\"error\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...), \"login\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...), \"providerSelection\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...)},\n \t\t\"tokenConfig\": map[string]any{\"accessTokenMaxAgeSeconds\": float64(86400), \"authorizeTokenMaxAgeSeconds\": float64(300)},\n \t},\n \t\"serverArguments\": map[string]any{\"audit-log-format\": []any{string(\"json\")}, \"audit-log-maxbackup\": []any{string(\"10\")}, \"audit-log-maxsize\": []any{string(\"100\")}, \"audit-log-path\": []any{string(\"/var/log/oauth-server/audit.log\")}, ...},\n \t\"servingInfo\": map[string]any{\"cipherSuites\": []any{string(\"TLS_AES_128_GCM_SHA256\"), string(\"TLS_AES_256_GCM_SHA384\"), string(\"TLS_CHACHA20_POLY1305_SHA256\"), string(\"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256\"), ...}, \"minTLSVersion\": string(\"VersionTLS12\"), \"namedCertificates\": []any{map[string]any{\"certFile\": string(\"/var/config/system/secrets/v4-0-config-system-router-certs/apps.\"...), \"keyFile\": string(\"/var/config/system/secrets/v4-0-config-system-router-certs/apps.\"...), \"names\": []any{string(\"*.apps.sno.openstack.lab\")}}}},\n \t\"volumesToMount\": map[string]any{\"identityProviders\": string(\"{}\")},\n }\n" |
| (x2) | openshift-authentication-operator |
cluster-authentication-operator-config-observer-configobserver |
authentication-operator |
ObserveConsoleURL |
assetPublicURL changed from to https://console-openshift-console.apps.sno.openstack.lab |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-oauth-apiserver (container is not ready in apiserver-865765995-c58rq pod)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.180.229:443/healthz\": dial tcp 172.30.180.229:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-oauth-apiserver (container is not ready in apiserver-865765995-c58rq pod)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.180.229:443/healthz\": dial tcp 172.30.180.229:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" | |
openshift-console-operator |
console-operator-status-controller-statussyncer_console |
console-operator |
OperatorStatusChanged |
Status for clusteroperator/console changed: Degraded message changed from "RouteHealthDegraded: route.route.openshift.io \"console\" not found" to "RouteHealthDegraded: route not yet available, https://console-openshift-console.apps.sno.openstack.lab returns '503 Service Unavailable'",Available message changed from "DeploymentAvailable: 0 replicas available for console deployment" to "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: route not yet available, https://console-openshift-console.apps.sno.openstack.lab returns '503 Service Unavailable'" | |
openshift-kube-controller-manager |
static-pod-installer |
installer-4-retry-1-master-0 |
StaticPodInstallerCompleted |
Successfully installed revision 4 | |
openshift-authentication-operator |
cluster-authentication-operator-metadata-controller-openshift-authentication-metadata |
authentication-operator |
ConfigMapCreated |
Created ConfigMap/v4-0-config-system-metadata -n openshift-authentication because it was missing | |
openshift-authentication-operator |
cluster-authentication-operator-oauthserver-workloadworkloadcontroller |
authentication-operator |
DeploymentCreated |
Created Deployment.apps/oauth-openshift -n openshift-authentication because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
StartingNewRevision |
new revision 5 triggered by "required configmap/config has changed" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-config-observer-configobserver |
kube-apiserver-operator |
ObservedConfigChanged |
Writing updated observed config:   map[string]any{   "admission": map[string]any{"pluginConfig": map[string]any{"PodSecurity": map[string]any{"configuration": map[string]any{"defaults": map[string]any{"audit": string("restricted"), "audit-version": string("latest"), "enforce": string("privileged"), "enforce-version": string("latest"), ...}}}, "network.openshift.io/ExternalIPRanger": map[string]any{"configuration": map[string]any{"allowIngressIP": bool(false), "apiVersion": string("network.openshift.io/v1"), "kind": string("ExternalIPRangerAdmissionConfig")}}, "network.openshift.io/RestrictedEndpointsAdmission": map[string]any{"configuration": map[string]any{"apiVersion": string("network.openshift.io/v1"), "kind": string("RestrictedEndpointsAdmissionConfig"), "restrictedCIDRs": []any{string("10.128.0.0/16"), string("172.30.0.0/16")}}}}},   "apiServerArguments": map[string]any{"api-audiences": []any{string("https://kubernetes.default.svc")}, "authentication-token-webhook-config-file": []any{string("/etc/kubernetes/static-pod-resources/secrets/webhook-authenticat"...)}, "authentication-token-webhook-version": []any{string("v1")}, "etcd-servers": []any{string("https://192.168.32.10:2379"), string("https://localhost:2379")}, ...}, + "authConfig": map[string]any{ + "oauthMetadataFile": string("/etc/kubernetes/static-pod-resources/configmaps/oauth-metadata/oauthMetadata"), + },   "corsAllowedOrigins": []any{string(`//127\.0\.0\.1(:|$)`), string("//localhost(:|$)")},   "gracefulTerminationDuration": string("15"),   ... // 2 identical entries   } | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-target-config-controller-targetconfigcontroller |
kube-apiserver-operator |
ConfigMapUpdated |
Updated ConfigMap/config -n openshift-kube-apiserver: cause by changes in data.config.yaml | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-oauth-apiserver (container is not ready in apiserver-865765995-c58rq pod)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.180.229:443/healthz\": dial tcp 172.30.180.229:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.180.229:443/healthz\": dial tcp 172.30.180.229:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)",Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nOAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.180.229:443/healthz\": dial tcp 172.30.180.229:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.180.229:443/healthz\": dial tcp 172.30.180.229:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" | |
openshift-authentication-operator |
cluster-authentication-operator-payload-config-controller-openshift-authentication-payloadconfig |
authentication-operator |
SecretCreated |
Created Secret/v4-0-config-system-session -n openshift-authentication because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-pod-5 -n openshift-kube-apiserver because it was missing | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-oauth-apiserver (container is not ready in apiserver-865765995-c58rq pod)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.180.229:443/healthz\": dial tcp 172.30.180.229:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-oauth-apiserver (container is not ready in apiserver-865765995-c58rq pod)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.180.229:443/healthz\": dial tcp 172.30.180.229:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)",Progressing message changed from "" to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 0, desired generation is 1.",Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.180.229:443/healthz\": dial tcp 172.30.180.229:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nOAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.180.229:443/healthz\": dial tcp 172.30.180.229:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 0, desired generation is 1." to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 0, desired generation is 1.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)",Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.180.229:443/healthz\": dial tcp 172.30.180.229:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.180.229:443/healthz\": dial tcp 172.30.180.229:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" | |
openshift-authentication-operator |
cluster-authentication-operator-resource-sync-controller-resourcesynccontroller |
authentication-operator |
ConfigMapCreated |
Created ConfigMap/oauth-openshift -n openshift-config-managed because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/config-5 -n openshift-kube-apiserver because it was missing | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.180.229:443/healthz\": dial tcp 172.30.180.229:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.180.229:443/healthz\": dial tcp 172.30.180.229:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-resource-sync-controller-resourcesynccontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/oauth-metadata -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-5 -n openshift-kube-apiserver because it was missing | |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/oauth-metadata-5 -n openshift-kube-apiserver because it was missing |
| | openshift-console | kubelet | downloads-dcd7b7d95-vtnfs | Created | Created container: download-server |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/bound-sa-token-signing-certs-5 -n openshift-kube-apiserver because it was missing |
| | openshift-console | kubelet | downloads-dcd7b7d95-vtnfs | Started | Started container download-server |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 0, desired generation is 1.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 0, desired generation is 2.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" |
| | openshift-console | kubelet | downloads-dcd7b7d95-vtnfs | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7fccb6e19eb4caa16d32f4cf59670c2c741c98b099d1f12368b85aab3f84dc38" in 33.12s (33.12s including waiting). Image size: 2890715256 bytes. |
| | openshift-authentication-operator | cluster-authentication-operator-payload-config-controller-openshift-authentication-payloadconfig | authentication-operator | ConfigMapCreated | Created ConfigMap/v4-0-config-system-cliconfig -n openshift-authentication because it was missing |
| (x2) | openshift-console | kubelet | downloads-dcd7b7d95-vtnfs | ProbeError | Readiness probe error: Get "http://10.128.0.101:8080/": dial tcp 10.128.0.101:8080: connect: connection refused body: |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/etcd-serving-ca-5 -n openshift-kube-apiserver because it was missing |
| (x2) | openshift-console | kubelet | downloads-dcd7b7d95-vtnfs | Unhealthy | Readiness probe failed: Get "http://10.128.0.101:8080/": dial tcp 10.128.0.101:8080: connect: connection refused |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-server-ca-5 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Created | Created container: kube-controller-manager |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d" already present on machine |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e8ae0b5bab647ff989f276cead5f360bcb88c813f181d75dc3106eb5dbde0b39" already present on machine |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Started | Started container cluster-policy-controller |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Created | Created container: cluster-policy-controller |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 4:\nNodeInstallerDegraded: installer: ionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=66) \"/etc/kubernetes/static-pod-resources/kube-controller-manager-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0217 15:15:18.390634 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0217 15:15:18.487744 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0217 15:15:18.487901 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0217 15:15:18.487928 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0217 15:15:18.492607 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting\nNodeInstallerDegraded: I0217 15:15:28.498671 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting\nNodeInstallerDegraded: W0217 15:15:38.495189 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: I0217 15:15:48.508047 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0217 15:16:18.508277 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0217 15:16:32.512683 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: " to "StaticPodsDegraded: pod/kube-controller-manager-master-0 container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: \nNodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 4:\nNodeInstallerDegraded: installer: ionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=66) \"/etc/kubernetes/static-pod-resources/kube-controller-manager-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0217 15:15:18.390634 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0217 15:15:18.487744 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0217 15:15:18.487901 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0217 15:15:18.487928 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0217 15:15:18.492607 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting\nNodeInstallerDegraded: I0217 15:15:28.498671 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting\nNodeInstallerDegraded: W0217 15:15:38.495189 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: I0217 15:15:48.508047 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0217 15:16:18.508277 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0217 15:16:32.512683 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: " |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kubelet-serving-ca-5 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Started | Started container kube-controller-manager |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5d54bd262ca625a326b01ea2bfd33db10a402c05590e6b710b0959712e1bf30b" already present on machine |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container init-config-reloader |
| | openshift-monitoring | multus | alertmanager-main-0 | AddedInterface | Add eth0 [10.128.0.90/23] from ovn-kubernetes |
| | openshift-kube-controller-manager | cluster-policy-controller | cluster-policy-controller-lock | LeaderElection | master-0_2d77953c-9757-4d13-9e18-f80b99ca7146 became leader |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Created | Created container: kube-controller-manager-recovery-controller |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container init-config-reloader |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52ba78768a24efe94f7f95fe5bdd3c6156919979d5882682e06ae4a8a8d3fb4a" already present on machine |
| | openshift-monitoring | multus | alertmanager-main-0 | AddedInterface | Add eth0 [10.128.0.90/23] from ovn-kubernetes |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: init-config-reloader |
| | openshift-kube-controller-manager | cluster-policy-controller | kube-controller-manager-master-0 | ControlPlaneTopology | unable to get control plane topology, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52ba78768a24efe94f7f95fe5bdd3c6156919979d5882682e06ae4a8a8d3fb4a" already present on machine |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Created | Created container: kube-controller-manager-cert-syncer |
| | openshift-monitoring | multus | prometheus-k8s-0 | AddedInterface | Add eth0 [10.128.0.91/23] from ovn-kubernetes |
| | openshift-monitoring | multus | prometheus-k8s-0 | AddedInterface | Add eth0 [10.128.0.91/23] from ovn-kubernetes |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/sa-token-signing-certs-5 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Started | Started container kube-controller-manager-cert-syncer |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e8ae0b5bab647ff989f276cead5f360bcb88c813f181d75dc3106eb5dbde0b39" already present on machine |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Started | Started container kube-controller-manager-recovery-controller |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: init-config-reloader |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22dd40cd10354e3512d2065a8dd8c9dcb995ea487c0f661f172c527509123fc" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-audit-policies-5 -n openshift-kube-apiserver because it was missing |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22dd40cd10354e3512d2065a8dd8c9dcb995ea487c0f661f172c527509123fc" |
| | openshift-kube-controller-manager | cert-recovery-controller | cert-recovery-controller-lock | LeaderElection | master-0_7272b91f-0dca-4c0e-a769-9b8ed6b6913b became leader |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Killing | Stopping container kube-apiserver |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Killing | Stopping container kube-apiserver-cert-regeneration-controller |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Killing | Stopping container kube-apiserver-check-endpoints |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Killing | Stopping container kube-apiserver-insecure-readyz |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Killing | Stopping container kube-apiserver-cert-syncer |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | ShutdownInitiated | Received signal to terminate, becoming unready, but keeping serving |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | AfterShutdownDelayDuration | The minimal shutdown duration of 0s finished |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | TerminationPreShutdownHooksFinished | All pre-shutdown hooks have been finished |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "StaticPodsDegraded: pod/kube-controller-manager-master-0 container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: \nNodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 4:\nNodeInstallerDegraded: installer: ionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=66) \"/etc/kubernetes/static-pod-resources/kube-controller-manager-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0217 15:15:18.390634 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0217 15:15:18.487744 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0217 15:15:18.487901 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0217 15:15:18.487928 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0217 15:15:18.492607 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting\nNodeInstallerDegraded: I0217 15:15:28.498671 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting\nNodeInstallerDegraded: W0217 15:15:38.495189 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: I0217 15:15:48.508047 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0217 15:16:18.508277 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0217 15:16:32.512683 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: " to "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 4:\nNodeInstallerDegraded: installer: ionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=66) \"/etc/kubernetes/static-pod-resources/kube-controller-manager-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0217 15:15:18.390634 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0217 15:15:18.487744 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0217 15:15:18.487901 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0217 15:15:18.487928 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0217 15:15:18.492607 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting\nNodeInstallerDegraded: I0217 15:15:28.498671 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting\nNodeInstallerDegraded: W0217 15:15:38.495189 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: I0217 15:15:48.508047 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0217 15:16:18.508277 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0217 15:16:32.512683 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: " |
| | openshift-kube-apiserver | static-pod-installer | installer-4-master-0 | StaticPodInstallerCompleted | Successfully installed revision 4 |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | InFlightRequestsDrained | All non long-running request(s) in-flight have drained |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/etcd-client-5 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver | kubelet | kube-apiserver-startup-monitor-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-startup-monitor-master-0 | Created | Created container: startup-monitor |
| | openshift-machine-config-operator | machineconfigoperator | machine-config | OperatorDegraded: MachineConfigControllerFailed | Failed to resync 4.18.32 because: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps/kubeconfig-data": dial tcp 172.30.0.1:443: connect: connection refused |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreateFailed | Failed to create Secret/localhost-recovery-serving-certkey-5 -n openshift-kube-apiserver: Post "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets": dial tcp 172.30.0.1:443: connect: connection refused |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | InstallerPodFailed | Failed to create installer pod for revision 4 count 1 on node "master-0": Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-4-retry-1-master-0": dial tcp 172.30.0.1:443: connect: connection refused |
| | openshift-kube-apiserver | kubelet | kube-apiserver-startup-monitor-master-0 | Started | Started container startup-monitor |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | HTTPServerStoppedListening | HTTP Server has stopped listening |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | InstallerPodFailed | Failed to create installer pod for revision 4 count 1 on node "master-0": Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-master-0": dial tcp 172.30.0.1:443: connect: connection refused |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | TerminationGracefulTerminationFinished | All pending requests processed |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22dd40cd10354e3512d2065a8dd8c9dcb995ea487c0f661f172c527509123fc" in 10.319s (10.319s including waiting). Image size: 462365110 bytes. |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22dd40cd10354e3512d2065a8dd8c9dcb995ea487c0f661f172c527509123fc" in 10.319s (10.319s including waiting). Image size: 462365110 bytes. |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: alertmanager |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: alertmanager |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container alertmanager |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container alertmanager |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: setup |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container setup |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver-insecure-readyz |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver-cert-regeneration-controller |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver-cert-syncer |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver-cert-regeneration-controller |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver-insecure-readyz |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver-cert-syncer |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver-check-endpoints |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver-check-endpoints |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | KubeAPIReadyz | readyz=true |
| | openshift-kube-apiserver | cert-regeneration-controller | cert-regeneration-controller-lock | LeaderElection | master-0_9aa5feee-2670-4163-b078-4e06950ef740 became leader |
| (x5) | openshift-etcd-operator | openshift-cluster-etcd-operator-script-controller-scriptcontroller | etcd-operator | ScriptControllerErrorUpdatingStatus | Put "https://172.30.0.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 172.30.0.1:443: connect: connection refused |
| (x5) | openshift-etcd-operator | openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller | etcd-operator | EtcdEndpointsErrorUpdatingStatus | Put "https://172.30.0.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 172.30.0.1:443: connect: connection refused |
| | openshift-authentication | deployment-controller | oauth-openshift | ScalingReplicaSet | Scaled up replica set oauth-openshift-5cdd6dbfff to 1 |
| | default | node-controller | master-0 | RegisteredNode | Node master-0 event: Registered Node master-0 in Controller |
| | kube-system | kube-controller-manager | kube-controller-manager | LeaderElection | master-0_4ee7208d-39fe-425b-bc10-5578a9c84f91 became leader |
| | openshift-authentication | replicaset-controller | oauth-openshift-5cdd6dbfff | SuccessfulCreate | Created pod: oauth-openshift-5cdd6dbfff-tvzt9 |
| (x13) | openshift-machine-config-operator | machineconfigoperator | machine-config | OperatorDegraded: MachineConfigPoolsFailed | Failed to resync 4.18.32 because: Get "https://172.30.0.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/master": dial tcp 172.30.0.1:443: connect: connection refused |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-serving-certkey-5 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-client-token-5 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/webhook-authenticator-5 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | RevisionTriggered | new revision 5 triggered by "required configmap/config has changed,optional configmap/oauth-metadata has been created" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 0, desired generation is 2.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 0, desired generation is 2.",Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.180.229:443/healthz\": dial tcp 172.30.180.229:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.180.229:443/healthz\": dial tcp 172.30.180.229:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"178fc17d-5409-4548-8e2d-fe8d8fdff7de\", ResourceVersion:\"16855\", Generation:0, CreationTimestamp:time.Date(2026, time.February, 17, 14, 55, 6, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2026, time.February, 17, 15, 15, 35, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc0026b9908), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" |
| (x10) | openshift-console |
kubelet |
console-98f66b5dc-p2gxf |
Unhealthy |
Startup probe failed: Get "https://10.128.0.103:8443/health": dial tcp 10.128.0.103:8443: connect: connection refused |
openshift-authentication |
multus |
oauth-openshift-5cdd6dbfff-tvzt9 |
AddedInterface |
Add eth0 [10.128.0.105/23] from ovn-kubernetes | |
openshift-authentication |
kubelet |
oauth-openshift-5cdd6dbfff-tvzt9 |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2969828f1fcae82b7ef16d3588046ace3cf51b9ea578658c42475386e0ee1fc7" | |
| (x11) | openshift-console |
kubelet |
console-98f66b5dc-p2gxf |
ProbeError |
Startup probe error: Get "https://10.128.0.103:8443/health": dial tcp 10.128.0.103:8443: connect: connection refused body: |
openshift-authentication |
kubelet |
oauth-openshift-5cdd6dbfff-tvzt9 |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2969828f1fcae82b7ef16d3588046ace3cf51b9ea578658c42475386e0ee1fc7" in 2.666s (2.666s including waiting). Image size: 476284775 bytes. | |
openshift-authentication |
kubelet |
oauth-openshift-5cdd6dbfff-tvzt9 |
Started |
Started container oauth-openshift | |
openshift-authentication |
kubelet |
oauth-openshift-5cdd6dbfff-tvzt9 |
Created |
Created container: oauth-openshift | |
| (x2) | openshift-marketplace |
kubelet |
marketplace-operator-6cc5b65c6b-wqxmh |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dab7a82d88f90f1ef4ac307b16132d4d573a4fa9080acc3272ca084613bd902a" already present on machine |
| (x2) | openshift-marketplace |
kubelet |
marketplace-operator-6cc5b65c6b-wqxmh |
Started |
Started container marketplace-operator |
| (x2) | openshift-marketplace |
kubelet |
marketplace-operator-6cc5b65c6b-wqxmh |
Created |
Created container: marketplace-operator |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Degraded changed from False to True ("NodeInstallerDegraded: 1 nodes are failing on revision 4:\nNodeInstallerDegraded: installer: ionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=66) \"/etc/kubernetes/static-pod-resources/kube-controller-manager-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0217 15:15:18.390634 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0217 15:15:18.487744 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0217 15:15:18.487901 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0217 15:15:18.487928 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0217 15:15:18.492607 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting\nNodeInstallerDegraded: I0217 15:15:28.498671 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, 
waiting\nNodeInstallerDegraded: W0217 15:15:38.495189 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: I0217 15:15:48.508047 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0217 15:16:18.508277 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0217 15:16:32.512683 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: ") | |
openshift-console-operator |
console-operator-status-controller-statussyncer_console |
console-operator |
OperatorStatusChanged |
Status for clusteroperator/console changed: Degraded changed from False to True ("RouteHealthDegraded: route not yet available, https://console-openshift-console.apps.sno.openstack.lab returns '503 Service Unavailable'") | |
| (x11) | openshift-console |
kubelet |
console-86d4dfb9dd-rz6cj |
ProbeError |
Startup probe error: Get "https://10.128.0.104:8443/health": dial tcp 10.128.0.104:8443: connect: connection refused body: |
openshift-kube-apiserver |
kubelet |
kube-apiserver-startup-monitor-master-0 |
Killing |
Stopping container startup-monitor | |
| (x11) | openshift-console |
kubelet |
console-86d4dfb9dd-rz6cj |
Unhealthy |
Startup probe failed: Get "https://10.128.0.104:8443/health": dial tcp 10.128.0.104:8443: connect: connection refused |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-installer-controller |
kube-controller-manager-operator |
NodeCurrentRevisionChanged |
Updated node "master-0" from revision 2 to 4 because static pod is ready | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ConfigMapCreated |
Created ConfigMap/monitoring-shared-config -n openshift-config-managed because it was missing | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ConfigMapUpdated |
Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: caused by changes in data.config-file.yaml | |
| (x9) | openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
SecretCreateFailed |
Failed to create Secret/localhost-recovery-serving-certkey-5 -n openshift-kube-apiserver: secrets "localhost-recovery-serving-certkey-5" already exists |
| (x24) | openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
StartingNewRevision |
new revision 5 triggered by "required configmap/config has changed,optional configmap/oauth-metadata has been created" |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Killing |
Stopping container kube-rbac-proxy-metric | |
openshift-monitoring |
statefulset-controller |
alertmanager-main |
SuccessfulDelete |
delete Pod alertmanager-main-0 in StatefulSet alertmanager-main successful | |
openshift-monitoring |
statefulset-controller |
alertmanager-main |
SuccessfulCreate |
create Pod alertmanager-main-0 in StatefulSet alertmanager-main successful | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52ba78768a24efe94f7f95fe5bdd3c6156919979d5882682e06ae4a8a8d3fb4a" already present on machine | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Created |
Created container: init-config-reloader | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Started |
Started container init-config-reloader | |
openshift-monitoring |
multus |
alertmanager-main-0 |
AddedInterface |
Add eth0 [10.128.0.106/23] from ovn-kubernetes | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22dd40cd10354e3512d2065a8dd8c9dcb995ea487c0f661f172c527509123fc" already present on machine | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Started |
Started container kube-rbac-proxy | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Started |
Started container kube-rbac-proxy-web | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Created |
Created container: kube-rbac-proxy-web | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Started |
Started container config-reloader | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Created |
Created container: config-reloader | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52ba78768a24efe94f7f95fe5bdd3c6156919979d5882682e06ae4a8a8d3fb4a" already present on machine | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Started |
Started container alertmanager | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Created |
Created container: alertmanager | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Created |
Created container: kube-rbac-proxy-metric | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Started |
Started container kube-rbac-proxy-metric | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:abf98e8b78df5cf21c9da051db2827b8c9081cf3ea201bf9017a5d9548dbc73e" already present on machine | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Created |
Created container: kube-rbac-proxy | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Started |
Started container prom-label-proxy | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Created |
Created container: prom-label-proxy | |
openshift-authentication |
deployment-controller |
oauth-openshift |
ScalingReplicaSet |
Scaled up replica set oauth-openshift-56d478877c to 1 from 0 | |
openshift-authentication |
deployment-controller |
oauth-openshift |
ScalingReplicaSet |
Scaled down replica set oauth-openshift-5cdd6dbfff to 0 from 1 | |
openshift-authentication |
replicaset-controller |
oauth-openshift-5cdd6dbfff |
SuccessfulDelete |
Deleted pod: oauth-openshift-5cdd6dbfff-tvzt9 | |
openshift-authentication |
kubelet |
oauth-openshift-5cdd6dbfff-tvzt9 |
Killing |
Stopping container oauth-openshift | |
openshift-console |
deployment-controller |
console |
ScalingReplicaSet |
Scaled down replica set console-98f66b5dc to 0 from 1 | |
openshift-console |
deployment-controller |
console |
ScalingReplicaSet |
Scaled up replica set console-55495f9f9c to 1 from 0 | |
openshift-console |
replicaset-controller |
console-55495f9f9c |
SuccessfulCreate |
Created pod: console-55495f9f9c-p58l5 | |
openshift-authentication |
replicaset-controller |
oauth-openshift-56d478877c |
SuccessfulCreate |
Created pod: oauth-openshift-56d478877c-mlr8b | |
openshift-console-operator |
console-operator-console-operator-consoleoperator |
console-operator |
DeploymentUpdateFailed |
Failed to update Deployment.apps/console -n openshift-console: Operation cannot be fulfilled on deployments.apps "console": the object has been modified; please apply your changes to the latest version and try again | |
openshift-console |
replicaset-controller |
console-98f66b5dc |
SuccessfulDelete |
Deleted pod: console-98f66b5dc-p2gxf | |
openshift-console |
multus |
console-55495f9f9c-p58l5 |
AddedInterface |
Add eth0 [10.128.0.107/23] from ovn-kubernetes | |
openshift-console |
kubelet |
console-55495f9f9c-p58l5 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8164cc9e16e8be9ea18be73c9df5041af326ed6b3059faff08f76e568cf4dc2" already present on machine | |
openshift-console |
kubelet |
console-55495f9f9c-p58l5 |
Created |
Created container: console | |
openshift-console |
kubelet |
console-55495f9f9c-p58l5 |
Started |
Started container console | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ConfigMapUpdated |
Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: caused by changes in data.config-file.yaml | |
openshift-kube-storage-version-migrator-operator |
openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator |
kube-storage-version-migrator-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-storage-version-migrator changed: Degraded message changed from "All is well" to "KubeStorageVersionMigratorStaticResourcesDegraded: \"kube-storage-version-migrator/namespace.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeStorageVersionMigratorStaticResourcesDegraded: \"kube-storage-version-migrator/serviceaccount.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator/serviceaccounts/kube-storage-version-migrator-sa\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeStorageVersionMigratorStaticResourcesDegraded: " | |
openshift-console |
deployment-controller |
console |
ScalingReplicaSet |
Scaled up replica set console-6f45cc898f to 1 from 0 | |
openshift-monitoring |
statefulset-controller |
prometheus-k8s |
SuccessfulDelete |
delete Pod prometheus-k8s-0 in StatefulSet prometheus-k8s successful | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
StartingNewRevision |
new revision 6 triggered by "required secret/localhost-recovery-client-token has changed" | |
openshift-console |
replicaset-controller |
console-6f45cc898f |
SuccessfulCreate |
Created pod: console-6f45cc898f-z9tb2 | |
openshift-console |
deployment-controller |
console |
ScalingReplicaSet |
Scaled down replica set console-86d4dfb9dd to 0 from 1 | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Killing |
Stopping container prometheus | |
openshift-console |
replicaset-controller |
console-86d4dfb9dd |
SuccessfulDelete |
Deleted pod: console-86d4dfb9dd-rz6cj | |
openshift-console |
multus |
console-6f45cc898f-z9tb2 |
AddedInterface |
Add eth0 [10.128.0.108/23] from ovn-kubernetes | |
openshift-console |
kubelet |
console-6f45cc898f-z9tb2 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8164cc9e16e8be9ea18be73c9df5041af326ed6b3059faff08f76e568cf4dc2" already present on machine | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/kube-scheduler-pod-6 -n openshift-kube-scheduler because it was missing | |
openshift-monitoring |
statefulset-controller |
prometheus-k8s |
SuccessfulCreate |
create Pod prometheus-k8s-0 in StatefulSet prometheus-k8s successful | |
openshift-console |
kubelet |
console-6f45cc898f-z9tb2 |
Started |
Started container console | |
openshift-console |
kubelet |
console-6f45cc898f-z9tb2 |
Created |
Created container: console | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/config-6 -n openshift-kube-scheduler because it was missing | |
openshift-monitoring |
multus |
prometheus-k8s-0 |
AddedInterface |
Add eth0 [10.128.0.109/23] from ovn-kubernetes | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/serviceaccount-ca-6 -n openshift-kube-scheduler because it was missing | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Started |
Started container init-config-reloader | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52ba78768a24efe94f7f95fe5bdd3c6156919979d5882682e06ae4a8a8d3fb4a" already present on machine | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Created |
Created container: init-config-reloader | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Created |
Created container: config-reloader | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/scheduler-kubeconfig-6 -n openshift-kube-scheduler because it was missing | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Created |
Created container: prometheus | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Started |
Started container prometheus | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52ba78768a24efe94f7f95fe5bdd3c6156919979d5882682e06ae4a8a8d3fb4a" already present on machine | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Started |
Started container config-reloader | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:86d47b2746de823e60068255722d2c0f1ff9d327b2865071a4f2f1e08b1f4ee9" already present on machine | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Degraded changed from True to False ("NodeControllerDegraded: All master nodes are ready"),Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 4"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 2; 0 nodes have achieved new revision 4" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 4" | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e4d0e747f55d3f773a63180bc4e4820ee5f17efbd45eb1dac9167fbc7520650e" already present on machine | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
SecretCreated |
Created Secret/serving-cert-6 -n openshift-kube-scheduler because it was missing | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Created |
Created container: thanos-sidecar | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/leader-election-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-scheduler\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/scheduler-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler:public-2\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/policyconfigmap-role.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/roles/system:openshift:sa-listing-configmaps\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/policyconfigmap-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/rolebindings/system:openshift:sa-listing-configmaps\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/svc.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/services/scheduler\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: " | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Started |
Started container thanos-sidecar | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-6 -n openshift-kube-scheduler because it was missing | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Started |
Started container kube-rbac-proxy-thanos | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Created |
Created container: kube-rbac-proxy-web | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Started |
Started container kube-rbac-proxy-web | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Created |
Created container: kube-rbac-proxy-thanos | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Created |
Created container: kube-rbac-proxy | |
openshift-monitoring |
kubelet |
prometheus-k8s-0 |
Started |
Started container kube-rbac-proxy | |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | SecretCreated | Created Secret/localhost-recovery-client-token-6 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | RevisionTriggered | new revision 6 triggered by "required secret/localhost-recovery-client-token has changed" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | StartingNewRevision | new revision 5 triggered by "required secret/localhost-recovery-client-token has changed" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/kube-controller-manager-pod-5 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/cluster-policy-controller-config-5 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/config-5 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/controller-manager-kubeconfig-5 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | StartingNewRevision | new revision 6 triggered by "secret \"localhost-recovery-serving-certkey-5\" not found" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: secrets \"localhost-recovery-serving-certkey-5\" already exists" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-pod-6 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/config-6 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: secrets \"localhost-recovery-serving-certkey-5\" already exists" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: secrets \"localhost-recovery-serving-certkey-5\" already exists\nInstallerControllerDegraded: missing required resources: secrets: localhost-recovery-client-token-5,localhost-recovery-serving-certkey-5",Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 3; 0 nodes have achieved new revision 4" to "NodeInstallerProgressing: 1 node is at revision 4; 0 nodes have achieved new revision 5",Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3; 0 nodes have achieved new revision 4" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 4; 0 nodes have achieved new revision 5" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/serviceaccount-ca-5 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-6 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/kube-controller-cert-syncer-kubeconfig-5 -n openshift-kube-controller-manager because it was missing |
| (x12) | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | RequiredInstallerResourcesMissing | configmaps: config-6,kube-scheduler-cert-syncer-kubeconfig-6,kube-scheduler-pod-6,scheduler-kubeconfig-6,serviceaccount-ca-6 |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator | kube-storage-version-migrator-operator | OperatorStatusChanged | Status for clusteroperator/kube-storage-version-migrator changed: Degraded message changed from "KubeStorageVersionMigratorStaticResourcesDegraded: \"kube-storage-version-migrator/namespace.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeStorageVersionMigratorStaticResourcesDegraded: \"kube-storage-version-migrator/serviceaccount.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator/serviceaccounts/kube-storage-version-migrator-sa\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeStorageVersionMigratorStaticResourcesDegraded: " to "All is well" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/oauth-metadata-6 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/service-ca-5 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/leader-election-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-scheduler\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/scheduler-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler:public-2\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/policyconfigmap-role.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/roles/system:openshift:sa-listing-configmaps\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/policyconfigmap-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/rolebindings/system:openshift:sa-listing-configmaps\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/svc.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/services/scheduler\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: " to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"revision-status-6\" not found\nInstallerControllerDegraded: missing required resources: configmaps: config-6,kube-scheduler-cert-syncer-kubeconfig-6,kube-scheduler-pod-6,scheduler-kubeconfig-6,serviceaccount-ca-6",Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 5; 0 nodes have achieved new revision 6"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 5" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 5; 0 nodes have achieved new revision 6" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/bound-sa-token-signing-certs-6 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/recycler-config-5 -n openshift-kube-controller-manager because it was missing |
| (x7) | openshift-authentication-operator | cluster-authentication-operator-oauthserver-workloadworkloadcontroller | authentication-operator | DeploymentUpdated | Updated Deployment.apps/oauth-openshift -n openshift-authentication because it changed |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.180.229:443/healthz\": dial tcp 172.30.180.229:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"178fc17d-5409-4548-8e2d-fe8d8fdff7de\", ResourceVersion:\"16855\", Generation:0, CreationTimestamp:time.Date(2026, time.February, 17, 14, 55, 6, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2026, time.February, 17, 15, 15, 35, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc0026b9908), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)",Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 0, desired generation is 2." to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/1 pods have been updated to the latest generation and 0/1 pods are available\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)",Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.180.229:443/healthz\": dial tcp 172.30.180.229:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"178fc17d-5409-4548-8e2d-fe8d8fdff7de\", ResourceVersion:\"16855\", Generation:0, CreationTimestamp:time.Date(2026, time.February, 17, 14, 55, 6, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2026, time.February, 17, 15, 15, 35, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc0026b9908), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"178fc17d-5409-4548-8e2d-fe8d8fdff7de\", ResourceVersion:\"16855\", Generation:0, CreationTimestamp:time.Date(2026, time.February, 17, 14, 55, 6, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2026, time.February, 17, 15, 15, 35, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc0026b9908), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" to "OAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"178fc17d-5409-4548-8e2d-fe8d8fdff7de\", ResourceVersion:\"16855\", Generation:0, CreationTimestamp:time.Date(2026, time.February, 17, 14, 55, 6, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2026, time.February, 17, 15, 15, 35, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc0026b9908), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/serving-cert-5 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/service-account-private-key-5 -n openshift-kube-controller-manager because it was missing |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Degraded message changed from "All is well" to "OperatorControllerStaticResourcesDegraded: \"operator-controller/00-namespace-openshift-operator-controller.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-operator-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/01-customresourcedefinition-clusterextensions.olm.operatorframework.io.yml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/clusterextensions.olm.operatorframework.io\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/02-serviceaccount-openshift-operator-controller-operator-controller-controller-manager.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-operator-controller/serviceaccounts/operator-controller-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/03-role-openshift-config-operator-controller-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config/roles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/04-role-openshift-operator-controller-operator-controller-leader-election-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/roles/operator-controller-leader-election-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/05-role-openshift-operator-controller-operator-controller-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/roles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/06-clusterrole-operator-controller-clusterextension-editor-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-clusterextension-editor-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/07-clusterrole-operator-controller-clusterextension-viewer-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-clusterextension-viewer-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/08-clusterrole-operator-controller-extension-editor-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-extension-editor-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/09-clusterrole-operator-controller-extension-viewer-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-extension-viewer-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/10-clusterrole-operator-controller-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/11-clusterrole-operator-controller-metrics-reader.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-metrics-reader\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/12-clusterrole-operator-controller-proxy-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-proxy-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/13-rolebinding-openshift-config-operator-controller-manager-rolebinding.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config/rolebindings/operator-controller-manager-rolebinding\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/14-rolebinding-openshift-operator-controller-operator-controller-leader-election-rolebinding.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/rolebindings/operator-controller-leader-election-rolebinding\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/15-rolebinding-openshift-operator-controller-operator-controller-manager-rolebinding.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/rolebindings/operator-controller-manager-rolebinding\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/16-clusterrolebinding-operator-controller-manager-rolebinding.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/operator-controller-manager-rolebinding\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: " |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/localhost-recovery-client-token-5 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/etcd-serving-ca-6 -n openshift-kube-apiserver because it was missing |
| | openshift-authentication | kubelet | oauth-openshift-56d478877c-mlr8b | Created | Created container: oauth-openshift |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | RevisionTriggered | new revision 5 triggered by "required secret/localhost-recovery-client-token has changed" |
| | openshift-authentication | kubelet | oauth-openshift-56d478877c-mlr8b | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2969828f1fcae82b7ef16d3588046ace3cf51b9ea578658c42475386e0ee1fc7" already present on machine |
| | openshift-authentication | multus | oauth-openshift-56d478877c-mlr8b | AddedInterface | Add eth0 [10.128.0.110/23] from ovn-kubernetes |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-server-ca-6 -n openshift-kube-apiserver because it was missing |
| | openshift-authentication | kubelet | oauth-openshift-56d478877c-mlr8b | Unhealthy | Readiness probe failed: Get "https://10.128.0.110:6443/healthz": dial tcp 10.128.0.110:6443: connect: connection refused |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-authentication | kubelet | oauth-openshift-56d478877c-mlr8b | Started | Started container oauth-openshift |
| | openshift-authentication | kubelet | oauth-openshift-56d478877c-mlr8b | ProbeError | Readiness probe error: Get "https://10.128.0.110:6443/healthz": dial tcp 10.128.0.110:6443: connect: connection refused body: |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.180.229:443/healthz\": dial tcp 172.30.180.229:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.180.229:443/healthz\": dial tcp 172.30.180.229:443: connect: connection refused\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.180.229:443/healthz\": dial tcp 172.30.180.229:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.180.229:443/healthz\": dial tcp 172.30.180.229:443: connect: connection refused\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.180.229:443/healthz\": dial tcp 172.30.180.229:443: connect: connection refused\nIngressStateEndpointsDegraded: All 1 endpoints for oauth-server are reporting 'not ready'\nWellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"178fc17d-5409-4548-8e2d-fe8d8fdff7de\", ResourceVersion:\"16855\", Generation:0, CreationTimestamp:time.Date(2026, time.February, 17, 14, 55, 6, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2026, time.February, 17, 15, 15, 35, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc0026b9908), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF" to "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.180.229:443/healthz\": dial tcp 172.30.180.229:443: connect: connection refused\nIngressStateEndpointsDegraded: All 1 endpoints for oauth-server are reporting 'not ready'\nWellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"178fc17d-5409-4548-8e2d-fe8d8fdff7de\", ResourceVersion:\"16855\", Generation:0, CreationTimestamp:time.Date(2026, time.February, 17, 14, 55, 6, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2026, time.February, 17, 15, 15, 35, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc0026b9908), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.180.229:443/healthz\": dial tcp 172.30.180.229:443: connect: connection refused\nIngressStateEndpointsDegraded: All 1 endpoints for oauth-server are reporting 'not ready'\nWellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"178fc17d-5409-4548-8e2d-fe8d8fdff7de\", ResourceVersion:\"16855\", Generation:0, CreationTimestamp:time.Date(2026, time.February, 17, 14, 55, 6, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2026, time.February, 17, 15, 15, 35, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc0026b9908), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" to "IngressStateEndpointsDegraded: All 1 endpoints for oauth-server are reporting 'not ready'\nWellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"178fc17d-5409-4548-8e2d-fe8d8fdff7de\", ResourceVersion:\"16855\", Generation:0, CreationTimestamp:time.Date(2026, time.February, 17, 14, 55, 6, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2026, time.February, 17, 15, 15, 35, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc0026b9908), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kubelet-serving-ca-6 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: configmaps: config-6,kube-scheduler-cert-syncer-kubeconfig-6,kube-scheduler-pod-6,scheduler-kubeconfig-6,serviceaccount-ca-6" to "NodeControllerDegraded: All master nodes are ready" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | NodeTargetRevisionChanged | Updating node "master-0" from revision 5 to 6 because node master-0 with revision 5 is the oldest |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.180.229:443/healthz\": dial tcp 172.30.180.229:443: connect: connection refused\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.180.229:443/healthz\": dial tcp 172.30.180.229:443: connect: connection refused\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"revision-status-6\" not found\nInstallerControllerDegraded: missing required resources: configmaps: config-6,kube-scheduler-cert-syncer-kubeconfig-6,kube-scheduler-pod-6,scheduler-kubeconfig-6,serviceaccount-ca-6" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: configmaps: config-6,kube-scheduler-cert-syncer-kubeconfig-6,kube-scheduler-pod-6,scheduler-kubeconfig-6,serviceaccount-ca-6" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded changed from True to False ("OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.180.229:443/healthz\": dial tcp 172.30.180.229:443: connect: connection refused\nIngressStateEndpointsDegraded: All 1 endpoints for oauth-server are reporting 'not ready'\nWellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"178fc17d-5409-4548-8e2d-fe8d8fdff7de\", ResourceVersion:\"16855\", Generation:0, CreationTimestamp:time.Date(2026, time.February, 17, 14, 55, 6, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2026, time.February, 17, 15, 15, 35, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc0026b9908), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF") |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"178fc17d-5409-4548-8e2d-fe8d8fdff7de\", ResourceVersion:\"16855\", Generation:0, CreationTimestamp:time.Date(2026, time.February, 17, 14, 55, 6, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2026, time.February, 17, 15, 15, 35, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc0026b9908), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" to "IngressStateEndpointsDegraded: All 1 endpoints for oauth-server are reporting 'not ready'\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"178fc17d-5409-4548-8e2d-fe8d8fdff7de\", ResourceVersion:\"16855\", Generation:0, CreationTimestamp:time.Date(2026, time.February, 17, 14, 55, 6, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2026, time.February, 17, 15, 15, 35, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc0026b9908), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: All 1 endpoints for oauth-server are reporting 'not ready'\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"178fc17d-5409-4548-8e2d-fe8d8fdff7de\", ResourceVersion:\"16855\", Generation:0, CreationTimestamp:time.Date(2026, time.February, 17, 14, 55, 6, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2026, time.February, 17, 15, 15, 35, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc0026b9908), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" to "IngressStateEndpointsDegraded: All 1 endpoints for oauth-server are reporting 'not ready'\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.180.229:443/healthz\": dial tcp 172.30.180.229:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"178fc17d-5409-4548-8e2d-fe8d8fdff7de\", ResourceVersion:\"16855\", Generation:0, CreationTimestamp:time.Date(2026, time.February, 17, 14, 55, 6, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2026, time.February, 17, 15, 15, 35, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc0026b9908), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Degraded message changed from "OperatorControllerStaticResourcesDegraded: \"operator-controller/00-namespace-openshift-operator-controller.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-operator-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/01-customresourcedefinition-clusterextensions.olm.operatorframework.io.yml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/clusterextensions.olm.operatorframework.io\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/02-serviceaccount-openshift-operator-controller-operator-controller-controller-manager.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-operator-controller/serviceaccounts/operator-controller-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/03-role-openshift-config-operator-controller-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config/roles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/04-role-openshift-operator-controller-operator-controller-leader-election-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/roles/operator-controller-leader-election-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/05-role-openshift-operator-controller-operator-controller-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/roles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/06-clusterrole-operator-controller-clusterextension-editor-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-clusterextension-editor-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/07-clusterrole-operator-controller-clusterextension-viewer-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-clusterextension-viewer-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/08-clusterrole-operator-controller-extension-editor-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-extension-editor-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/09-clusterrole-operator-controller-extension-viewer-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-extension-viewer-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/10-clusterrole-operator-controller-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/11-clusterrole-operator-controller-metrics-reader.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-metrics-reader\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/12-clusterrole-operator-controller-proxy-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-proxy-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/13-rolebinding-openshift-config-operator-controller-manager-rolebinding.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config/rolebindings/operator-controller-manager-rolebinding\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/14-rolebinding-openshift-operator-controller-operator-controller-leader-election-rolebinding.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/rolebindings/operator-controller-leader-election-rolebinding\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/15-rolebinding-openshift-operator-controller-operator-controller-manager-rolebinding.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/rolebindings/operator-controller-manager-rolebinding\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/16-clusterrolebinding-operator-controller-manager-rolebinding.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/operator-controller-manager-rolebinding\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: " to "All is well" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 4; 0 nodes have achieved new revision 5"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 4" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 4; 0 nodes have achieved new revision 5" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | NodeTargetRevisionChanged | Updating node "master-0" from revision 4 to 5 because node master-0 with revision 4 is the oldest |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/sa-token-signing-certs-6 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | PodCreated | Created Pod/installer-5-master-0 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | PodCreated | Created Pod/installer-6-master-0 -n openshift-kube-scheduler because it was missing |
| (x20) | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | RequiredInstallerResourcesMissing | secrets: localhost-recovery-client-token-5,localhost-recovery-serving-certkey-5 |
| | openshift-kube-scheduler | multus | installer-6-master-0 | AddedInterface | Add eth0 [10.128.0.111/23] from ovn-kubernetes |
| | openshift-kube-controller-manager | kubelet | installer-5-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e8ae0b5bab647ff989f276cead5f360bcb88c813f181d75dc3106eb5dbde0b39" already present on machine |
| | openshift-kube-scheduler | kubelet | installer-6-master-0 | Created | Created container: installer |
| | openshift-kube-scheduler | kubelet | installer-6-master-0 | Started | Started container installer |
| | openshift-kube-scheduler | kubelet | installer-6-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f56d01ef06fe016c748e8c87538dea8e9fcc84856eb116bd3597cc8e042e9f0a" already present on machine |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-audit-policies-6 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-controller-manager | multus | installer-5-master-0 | AddedInterface | Add eth0 [10.128.0.112/23] from ovn-kubernetes |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: All 1 endpoints for oauth-server are reporting 'not ready'\nWellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"178fc17d-5409-4548-8e2d-fe8d8fdff7de\", ResourceVersion:\"16855\", Generation:0, CreationTimestamp:time.Date(2026, time.February, 17, 14, 55, 6, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2026, time.February, 17, 15, 15, 35, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc0026b9908), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" to "WellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"178fc17d-5409-4548-8e2d-fe8d8fdff7de\", ResourceVersion:\"16855\", Generation:0, CreationTimestamp:time.Date(2026, time.February, 17, 14, 55, 6, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2026, time.February, 17, 15, 15, 35, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc0026b9908), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/etcd-client-6 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-controller-manager | kubelet | installer-5-master-0 | Created | Created container: installer |
| | openshift-kube-controller-manager | kubelet | installer-5-master-0 | Started | Started container installer |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-serving-certkey-6 -n openshift-kube-apiserver because it was missing |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorVersionChanged | clusteroperator/authentication version "oauth-openshift" changed from "" to "4.18.32_openshift" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: secrets \"localhost-recovery-serving-certkey-5\" already exists\nInstallerControllerDegraded: missing required resources: secrets: localhost-recovery-client-token-5,localhost-recovery-serving-certkey-5" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: secrets \"localhost-recovery-serving-certkey-5\" already exists" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | NodeTargetRevisionChanged | Updating node "master-0" from revision 4 to 5 because node master-0 with revision 4 is the oldest |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/1 pods have been updated to the latest generation and 0/1 pods are available\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "WellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)",Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "WellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: status.versions changed from [{"operator" "4.18.32"} {"oauth-apiserver" "4.18.32"}] to [{"operator" "4.18.32"} {"oauth-apiserver" "4.18.32"} {"oauth-openshift" "4.18.32_openshift"}] |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-client-token-6 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/webhook-authenticator-6 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | PodCreated | Created Pod/installer-5-master-0 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | RevisionTriggered | new revision 6 triggered by "secret \"localhost-recovery-serving-certkey-5\" not found" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: secrets \"localhost-recovery-serving-certkey-5\" already exists" to "NodeControllerDegraded: All master nodes are ready" |
| | openshift-kube-apiserver | kubelet | installer-5-master-0 | Created | Created container: installer |
| | openshift-kube-apiserver | multus | installer-5-master-0 | AddedInterface | Add eth0 [10.128.0.113/23] from ovn-kubernetes |
| | openshift-kube-apiserver | kubelet | installer-5-master-0 | Started | Started container installer |
| | openshift-kube-apiserver | kubelet | installer-5-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49" already present on machine |
| | openshift-kube-apiserver | kubelet | installer-5-master-0 | Killing | Stopping container installer |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 4; 0 nodes have achieved new revision 5" to "NodeInstallerProgressing: 1 node is at revision 4; 0 nodes have achieved new revision 6",Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 4; 0 nodes have achieved new revision 5" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 4; 0 nodes have achieved new revision 6" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | PodCreated | Created Pod/installer-6-master-0 -n openshift-kube-apiserver because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-kube-apiserver | multus | installer-6-master-0 | AddedInterface | Add eth0 [10.128.0.114/23] from ovn-kubernetes |
| | openshift-kube-apiserver | kubelet | installer-6-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49" already present on machine |
| | openshift-kube-apiserver | kubelet | installer-6-master-0 | Created | Created container: installer |
| | openshift-kube-apiserver | kubelet | installer-6-master-0 | Started | Started container installer |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "WellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"178fc17d-5409-4548-8e2d-fe8d8fdff7de\", ResourceVersion:\"16855\", Generation:0, CreationTimestamp:time.Date(2026, time.February, 17, 14, 55, 6, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2026, time.February, 17, 15, 15, 35, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc0026b9908), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" to "All is well" |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Killing | Stopping container kube-scheduler-recovery-controller |
| | openshift-kube-scheduler | static-pod-installer | installer-6-master-0 | StaticPodInstallerCompleted | Successfully installed revision 6 |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Killing | Stopping container kube-scheduler-cert-syncer |
| | openshift-kube-controller-manager | static-pod-installer | installer-5-master-0 | StaticPodInstallerCompleted | Successfully installed revision 5 |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Killing | Stopping container kube-controller-manager-recovery-controller |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Killing | Stopping container kube-controller-manager |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Killing | Stopping container cluster-policy-controller |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Killing | Stopping container kube-controller-manager-cert-syncer |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Started | Started container wait-for-host-port |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d" already present on machine |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Created | Created container: wait-for-host-port |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d" already present on machine |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Created | Created container: kube-scheduler |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Started | Started container kube-scheduler |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f56d01ef06fe016c748e8c87538dea8e9fcc84856eb116bd3597cc8e042e9f0a" already present on machine |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Created | Created container: kube-scheduler-cert-syncer |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Started | Started container kube-scheduler-cert-syncer |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f56d01ef06fe016c748e8c87538dea8e9fcc84856eb116bd3597cc8e042e9f0a" already present on machine |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Created | Created container: kube-scheduler-recovery-controller |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Started | Started container kube-scheduler-recovery-controller |
| | openshift-kube-scheduler | cert-recovery-controller | cert-recovery-controller-lock | LeaderElection | master-0_f220ef53-e2bc-4a2f-a939-48b1a1311182 became leader |
| | openshift-kube-scheduler | default-scheduler | kube-scheduler | LeaderElection | master-0_1f4d5363-8569-4242-94b3-fc96655bd7e3 became leader |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5d54bd262ca625a326b01ea2bfd33db10a402c05590e6b710b0959712e1bf30b" already present on machine |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Started | Started container cluster-policy-controller |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Started | Started container kube-controller-manager-cert-syncer |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Started | Started container kube-controller-manager-recovery-controller |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Created | Created container: kube-controller-manager-recovery-controller |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e8ae0b5bab647ff989f276cead5f360bcb88c813f181d75dc3106eb5dbde0b39" already present on machine |
| | openshift-kube-controller-manager | cluster-policy-controller | kube-controller-manager-master-0 | ControlPlaneTopology | unable to get control plane topology, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Created | Created container: cluster-policy-controller |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Created | Created container: kube-controller-manager-cert-syncer |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e8ae0b5bab647ff989f276cead5f360bcb88c813f181d75dc3106eb5dbde0b39" already present on machine |
| | openshift-kube-controller-manager | cluster-policy-controller | cluster-policy-controller-lock | LeaderElection | master-0_e5d991a7-e225-480a-bb66-2c4ecc180cc4 became leader |
| | openshift-kube-controller-manager | cert-recovery-controller | cert-recovery-controller-lock | LeaderElection | master-0_62b7ae60-57d1-43bf-b0e3-67e104fced22 became leader |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 5"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 4; 0 nodes have achieved new revision 5" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 5" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | NodeCurrentRevisionChanged | Updated node "master-0" from revision 4 to 5 because static pod is ready |
| | kube-system | kube-controller-manager | kube-controller-manager | LeaderElection | master-0_56c9c0e1-9337-400f-a777-2396cf555cf5 became leader |
| | default | node-controller | master-0 | RegisteredNode | Node master-0 event: Registered Node master-0 in Controller |
| (x11) | openshift-console | kubelet | console-55495f9f9c-p58l5 | ProbeError | Startup probe error: Get "https://10.128.0.107:8443/health": dial tcp 10.128.0.107:8443: connect: connection refused body: |
| (x11) | openshift-console | kubelet | console-55495f9f9c-p58l5 | Unhealthy | Startup probe failed: Get "https://10.128.0.107:8443/health": dial tcp 10.128.0.107:8443: connect: connection refused |
| (x11) | openshift-console | kubelet | console-6f45cc898f-z9tb2 | ProbeError | Startup probe error: Get "https://10.128.0.108:8443/health": dial tcp 10.128.0.108:8443: connect: connection refused body: |
| (x11) | openshift-console | kubelet | console-6f45cc898f-z9tb2 | Unhealthy | Startup probe failed: Get "https://10.128.0.108:8443/health": dial tcp 10.128.0.108:8443: connect: connection refused |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Killing | Stopping container kube-apiserver-cert-syncer |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Killing | Stopping container kube-apiserver-cert-regeneration-controller |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Killing | Stopping container kube-apiserver-insecure-readyz |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | HTTPServerStoppedListening | HTTP Server has stopped listening |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | InFlightRequestsDrained | All non long-running request(s) in-flight have drained |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Killing | Stopping container kube-apiserver |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | TerminationPreShutdownHooksFinished | All pre-shutdown hooks have been finished |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | AfterShutdownDelayDuration | The minimal shutdown duration of 0s finished |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | ShutdownInitiated | Received signal to terminate, becoming unready, but keeping serving |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Killing | Stopping container kube-apiserver-check-endpoints |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | TerminationGracefulTerminationFinished | All pending requests processed |
| | openshift-kube-apiserver | kubelet | kube-apiserver-startup-monitor-master-0 | Started | Started container startup-monitor |
| | openshift-kube-apiserver | kubelet | kube-apiserver-startup-monitor-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-startup-monitor-master-0 | Created | Created container: startup-monitor |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: setup |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container setup |
| (x2) | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d" already present on machine |
| (x2) | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Started | Started container kube-controller-manager |
| (x2) | openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Created | Created container: kube-controller-manager |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver-cert-syncer |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver-cert-syncer |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver-cert-regeneration-controller |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver-cert-regeneration-controller |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver-check-endpoints |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver-insecure-readyz |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver-insecure-readyz |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver-check-endpoints |
| | openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | KubeAPIReadyz | readyz=true |
| | openshift-kube-apiserver | cert-regeneration-controller | cert-regeneration-controller-lock | LeaderElection | master-0_ee372e62-ad29-43f4-9bc5-8b76bc38481d became leader |
| (x13) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-satokensignercontroller | kube-controller-manager-operator | SecretUpdateFailed | Failed to update Secret/service-account-private-key -n openshift-kube-controller-manager: Put "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/service-account-private-key": dial tcp 172.30.0.1:443: connect: connection refused |
| | kube-system | kube-controller-manager | kube-controller-manager | LeaderElection | master-0_4c352be9-3761-41be-bcd8-6421bf64cb9b became leader |
| | default | node-controller | master-0 | RegisteredNode | Node master-0 event: Registered Node master-0 in Controller |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "WellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "",Available message changed from "WellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"178fc17d-5409-4548-8e2d-fe8d8fdff7de\", ResourceVersion:\"18300\", Generation:0, CreationTimestamp:time.Date(2026, time.February, 17, 14, 55, 6, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2026, time.February, 17, 15, 32, 5, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc003945650), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" |
| | openshift-kube-apiserver | kubelet | kube-apiserver-startup-monitor-master-0 | Killing | Stopping container startup-monitor |
| | openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled down replica set console-55495f9f9c to 0 from 1 |
| | openshift-console | replicaset-controller | console-55495f9f9c | SuccessfulDelete | Deleted pod: console-55495f9f9c-p58l5 |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-satokensignercontroller | kube-controller-manager-operator | SecretUpdated | Updated Secret/service-account-private-key -n openshift-kube-controller-manager because it changed |
| | openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Degraded message changed from "All is well" to "CatalogdStaticResourcesDegraded: \"catalogd/00-namespace-openshift-catalogd.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-catalogd\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/01-customresourcedefinition-clustercatalogs.olm.operatorframework.io.yml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/clustercatalogs.olm.operatorframework.io\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/02-serviceaccount-openshift-catalogd-catalogd-controller-manager.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-catalogd/serviceaccounts/catalogd-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/03-role-openshift-catalogd-catalogd-leader-election-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-catalogd/roles/catalogd-leader-election-role\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/04-role-openshift-config-catalogd-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config/roles/catalogd-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/05-clusterrole-catalogd-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/catalogd-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/06-clusterrole-catalogd-metrics-reader.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/catalogd-metrics-reader\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/07-clusterrole-catalogd-proxy-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/catalogd-proxy-role\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \nOperatorControllerStaticResourcesDegraded: \"operator-controller/00-namespace-openshift-operator-controller.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-operator-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/01-customresourcedefinition-clusterextensions.olm.operatorframework.io.yml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/clusterextensions.olm.operatorframework.io\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/02-serviceaccount-openshift-operator-controller-operator-controller-controller-manager.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-operator-controller/serviceaccounts/operator-controller-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/03-role-openshift-config-operator-controller-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config/roles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/04-role-openshift-operator-controller-operator-controller-leader-election-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/roles/operator-controller-leader-election-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/05-role-openshift-operator-controller-operator-controller-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/roles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/06-clusterrole-operator-controller-clusterextension-editor-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-clusterextension-editor-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/07-clusterrole-operator-controller-clusterextension-viewer-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-clusterextension-viewer-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: " |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "All is well" to "WellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"178fc17d-5409-4548-8e2d-fe8d8fdff7de\", ResourceVersion:\"18300\", Generation:0, CreationTimestamp:time.Date(2026, time.February, 17, 14, 55, 6, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2026, time.February, 17, 15, 32, 5, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc003945650), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing changed from True to False ("AuthenticatorCertKeyProgressing: All is well"),Available changed from False to True ("All is well") |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "WellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"178fc17d-5409-4548-8e2d-fe8d8fdff7de\", ResourceVersion:\"18300\", Generation:0, CreationTimestamp:time.Date(2026, time.February, 17, 14, 55, 6, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2026, time.February, 17, 15, 32, 5, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc003945650), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" to "All is well" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nSATokenSignerDegraded: Operation cannot be fulfilled on secrets \"service-account-private-key\": the object has been modified; please apply your changes to the latest version and try again" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | NodeCurrentRevisionChanged | Updated node "master-0" from revision 5 to 6 because static pod is ready |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 6"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 4; 0 nodes have achieved new revision 6" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 6" |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded changed from True to False ("PDBSyncDegraded: Get \"https://172.30.0.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console\": dial tcp 172.30.0.1:443: connect: connection refused\nServiceSyncDegraded: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-console/services/downloads\": dial tcp 172.30.0.1:443: connect: connection refused\nDownloadsCustomRouteSyncDegraded: Delete \"https://172.30.0.1:443/apis/route.openshift.io/v1/namespaces/openshift-console/routes/downloads-custom\": dial tcp 172.30.0.1:443: connect: connection refused\nDownloadsDeploymentSyncDegraded: Get \"https://172.30.0.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads\": dial tcp 172.30.0.1:443: connect: connection refused"),Progressing changed from True to False ("All is well"),Available changed from False to True ("All is well"),Upgradeable changed from True to False ("DownloadsCustomRouteSyncUpgradeable: Delete \"https://172.30.0.1:443/apis/route.openshift.io/v1/namespaces/openshift-console/routes/downloads-custom\": dial tcp 172.30.0.1:443: connect: connection refused") |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded message changed from "PDBSyncDegraded: Get \"https://172.30.0.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console\": dial tcp 172.30.0.1:443: connect: connection refused\nDownloadsCustomRouteSyncDegraded: Delete \"https://172.30.0.1:443/apis/route.openshift.io/v1/namespaces/openshift-console/routes/downloads-custom\": dial tcp 172.30.0.1:443: connect: connection refused\nDownloadsDeploymentSyncDegraded: Get \"https://172.30.0.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads\": dial tcp 172.30.0.1:443: connect: connection refused" to "DownloadsCustomRouteSyncDegraded: Delete \"https://172.30.0.1:443/apis/route.openshift.io/v1/namespaces/openshift-console/routes/downloads-custom\": dial tcp 172.30.0.1:443: connect: connection refused\nDownloadsDeploymentSyncDegraded: Get \"https://172.30.0.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads\": dial tcp 172.30.0.1:443: connect: connection refused" |
openshift-console-operator |
console-operator-status-controller-statussyncer_console |
console-operator |
OperatorStatusChanged |
Status for clusteroperator/console changed: Degraded message changed from "DownloadsCustomRouteSyncDegraded: Delete \"https://172.30.0.1:443/apis/route.openshift.io/v1/namespaces/openshift-console/routes/downloads-custom\": dial tcp 172.30.0.1:443: connect: connection refused\nDownloadsDeploymentSyncDegraded: Get \"https://172.30.0.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads\": dial tcp 172.30.0.1:443: connect: connection refused" to "DownloadsCustomRouteSyncDegraded: Delete \"https://172.30.0.1:443/apis/route.openshift.io/v1/namespaces/openshift-console/routes/downloads-custom\": dial tcp 172.30.0.1:443: connect: connection refused" | |
openshift-console-operator |
console-operator-status-controller-statussyncer_console |
console-operator |
OperatorStatusChanged |
Status for clusteroperator/console changed: Degraded message changed from "PDBSyncDegraded: Get \"https://172.30.0.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console\": dial tcp 172.30.0.1:443: connect: connection refused\nServiceSyncDegraded: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-console/services/downloads\": dial tcp 172.30.0.1:443: connect: connection refused\nDownloadsCustomRouteSyncDegraded: Delete \"https://172.30.0.1:443/apis/route.openshift.io/v1/namespaces/openshift-console/routes/downloads-custom\": dial tcp 172.30.0.1:443: connect: connection refused\nDownloadsDeploymentSyncDegraded: Get \"https://172.30.0.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads\": dial tcp 172.30.0.1:443: connect: connection refused" to "PDBSyncDegraded: Get \"https://172.30.0.1:443/apis/policy/v1/namespaces/openshift-console/poddisruptionbudgets/console\": dial tcp 172.30.0.1:443: connect: connection refused\nDownloadsCustomRouteSyncDegraded: Delete \"https://172.30.0.1:443/apis/route.openshift.io/v1/namespaces/openshift-console/routes/downloads-custom\": dial tcp 172.30.0.1:443: connect: connection refused\nDownloadsDeploymentSyncDegraded: Get \"https://172.30.0.1:443/apis/apps/v1/namespaces/openshift-console/deployments/downloads\": dial tcp 172.30.0.1:443: connect: connection refused" | |
openshift-console-operator |
console-operator-status-controller-statussyncer_console |
console-operator |
OperatorStatusChanged |
Status for clusteroperator/console changed: Degraded message changed from "DownloadsCustomRouteSyncDegraded: Delete \"https://172.30.0.1:443/apis/route.openshift.io/v1/namespaces/openshift-console/routes/downloads-custom\": dial tcp 172.30.0.1:443: connect: connection refused" to "All is well",Upgradeable changed from False to True ("All is well") | |
openshift-cluster-olm-operator |
olm-status-controller-statussyncer_olm |
cluster-olm-operator |
OperatorStatusChanged |
Status for clusteroperator/olm changed: Degraded message changed from "CatalogdStaticResourcesDegraded: \"catalogd/00-namespace-openshift-catalogd.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-catalogd\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/01-customresourcedefinition-clustercatalogs.olm.operatorframework.io.yml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/clustercatalogs.olm.operatorframework.io\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/02-serviceaccount-openshift-catalogd-catalogd-controller-manager.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-catalogd/serviceaccounts/catalogd-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/03-role-openshift-catalogd-catalogd-leader-election-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-catalogd/roles/catalogd-leader-election-role\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/04-role-openshift-config-catalogd-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config/roles/catalogd-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/05-clusterrole-catalogd-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/catalogd-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/06-clusterrole-catalogd-metrics-reader.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/catalogd-metrics-reader\": dial tcp 172.30.0.1:443: connect: connection 
refused\nCatalogdStaticResourcesDegraded: \"catalogd/07-clusterrole-catalogd-proxy-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/catalogd-proxy-role\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \nOperatorControllerStaticResourcesDegraded: \"operator-controller/00-namespace-openshift-operator-controller.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-operator-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/01-customresourcedefinition-clusterextensions.olm.operatorframework.io.yml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/clusterextensions.olm.operatorframework.io\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/02-serviceaccount-openshift-operator-controller-operator-controller-controller-manager.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-operator-controller/serviceaccounts/operator-controller-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/03-role-openshift-config-operator-controller-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config/roles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/04-role-openshift-operator-controller-operator-controller-leader-election-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/roles/operator-controller-leader-election-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: 
\"operator-controller/05-role-openshift-operator-controller-operator-controller-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/roles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/06-clusterrole-operator-controller-clusterextension-editor-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-clusterextension-editor-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/07-clusterrole-operator-controller-clusterextension-viewer-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-clusterextension-viewer-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: " to "All is well" | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ConfigMapUpdated |
Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/kube-controller-manager-pod-6 -n openshift-kube-controller-manager because it was missing | |
| (x22) | openshift-kube-controller-manager-operator |
kube-controller-manager-operator-satokensignercontroller |
kube-controller-manager-operator |
SecretUpdateFailed |
Failed to update Secret/service-account-private-key -n openshift-kube-controller-manager: Operation cannot be fulfilled on secrets "service-account-private-key": the object has been modified; please apply your changes to the latest version and try again |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
StartingNewRevision |
new revision 6 triggered by "required secret/service-account-private-key has changed" | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nSATokenSignerDegraded: Operation cannot be fulfilled on secrets \"service-account-private-key\": the object has been modified; please apply your changes to the latest version and try again" to "NodeControllerDegraded: All master nodes are ready" | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/config-6 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/cluster-policy-controller-config-6 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-scheduler |
multus |
revision-pruner-6-master-0 |
AddedInterface |
Add eth0 [10.128.0.115/23] from ovn-kubernetes | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-prunecontroller |
openshift-kube-scheduler-operator |
PodCreated |
Created Pod/revision-pruner-6-master-0 -n openshift-kube-scheduler because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 6"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 5; 0 nodes have achieved new revision 6" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 6" | |
openshift-apiserver-operator |
openshift-apiserver-operator-connectivity-check-controller-connectivitycheckcontroller |
openshift-apiserver-operator |
CustomResourceDefinitionCreateFailed |
Failed to create CustomResourceDefinition.apiextensions.k8s.io/podnetworkconnectivitychecks.controlplane.operator.openshift.io: customresourcedefinitions.apiextensions.k8s.io "podnetworkconnectivitychecks.controlplane.operator.openshift.io" already exists | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-connectivity-check-controller-connectivitycheckcontroller |
kube-apiserver-operator |
CustomResourceDefinitionCreated |
Created CustomResourceDefinition.apiextensions.k8s.io/podnetworkconnectivitychecks.controlplane.operator.openshift.io because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/kube-controller-cert-syncer-kubeconfig-6 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/controller-manager-kubeconfig-6 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-scheduler |
kubelet |
revision-pruner-6-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f56d01ef06fe016c748e8c87538dea8e9fcc84856eb116bd3597cc8e042e9f0a" already present on machine | |
openshift-kube-scheduler |
kubelet |
revision-pruner-6-master-0 |
Created |
Created container: pruner | |
openshift-kube-scheduler |
kubelet |
revision-pruner-6-master-0 |
Started |
Started container pruner | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/serviceaccount-ca-6 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/service-ca-6 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/recycler-config-6 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
SecretCreated |
Created Secret/service-account-private-key-6 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
SecretCreated |
Created Secret/serving-cert-6 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
SecretCreated |
Created Secret/localhost-recovery-client-token-6 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
RevisionTriggered |
new revision 6 triggered by "required secret/service-account-private-key has changed" | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 5; 0 nodes have achieved new revision 6"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 5" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 5; 0 nodes have achieved new revision 6" | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-installer-controller |
kube-controller-manager-operator |
NodeTargetRevisionChanged |
Updating node "master-0" from revision 5 to 6 because node master-0 with revision 5 is the oldest | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-installer-controller |
kube-controller-manager-operator |
PodCreated |
Created Pod/installer-6-master-0 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-controller-manager |
kubelet |
installer-6-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e8ae0b5bab647ff989f276cead5f360bcb88c813f181d75dc3106eb5dbde0b39" already present on machine | |
openshift-kube-controller-manager |
multus |
installer-6-master-0 |
AddedInterface |
Add eth0 [10.128.0.116/23] from ovn-kubernetes | |
openshift-kube-controller-manager |
kubelet |
installer-6-master-0 |
Started |
Started container installer | |
openshift-kube-controller-manager |
kubelet |
installer-6-master-0 |
Created |
Created container: installer | |
openshift-kube-controller-manager |
cluster-policy-controller-namespace-security-allocation-controller |
kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for sushy-emulator namespace | |
sushy-emulator |
replicaset-controller |
sushy-emulator-58f4c9b998 |
SuccessfulCreate |
Created pod: sushy-emulator-58f4c9b998-jd8tg | |
sushy-emulator |
deployment-controller |
sushy-emulator |
ScalingReplicaSet |
Scaled up replica set sushy-emulator-58f4c9b998 to 1 | |
sushy-emulator |
kubelet |
sushy-emulator-58f4c9b998-jd8tg |
Pulling |
Pulling image "quay.io/rhn_gps_hjensas/sushy-tools:dev-1761151453" | |
sushy-emulator |
multus |
sushy-emulator-58f4c9b998-jd8tg |
AddedInterface |
Add eth0 [10.128.0.117/23] from ovn-kubernetes | |
sushy-emulator |
kubelet |
sushy-emulator-58f4c9b998-jd8tg |
Started |
Started container sushy-emulator | |
sushy-emulator |
kubelet |
sushy-emulator-58f4c9b998-jd8tg |
Created |
Created container: sushy-emulator | |
sushy-emulator |
kubelet |
sushy-emulator-58f4c9b998-jd8tg |
Pulled |
Successfully pulled image "quay.io/rhn_gps_hjensas/sushy-tools:dev-1761151453" in 7.886s (7.887s including waiting). Image size: 326772052 bytes. | |
openshift-kube-controller-manager |
static-pod-installer |
installer-6-master-0 |
StaticPodInstallerCompleted |
Successfully installed revision 6 | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master-0 |
Killing |
Stopping container kube-controller-manager | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master-0 |
Killing |
Stopping container kube-controller-manager-cert-syncer | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master-0 |
Killing |
Stopping container cluster-policy-controller | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master-0 |
Killing |
Stopping container kube-controller-manager-recovery-controller | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5d54bd262ca625a326b01ea2bfd33db10a402c05590e6b710b0959712e1bf30b" already present on machine | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master-0 |
Started |
Started container kube-controller-manager | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master-0 |
Created |
Created container: kube-controller-manager | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d" already present on machine | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master-0 |
Started |
Started container cluster-policy-controller | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master-0 |
Created |
Created container: kube-controller-manager-recovery-controller | |
openshift-kube-controller-manager |
cluster-policy-controller |
cluster-policy-controller-lock |
LeaderElection |
master-0_d5bc76b0-40b7-4d7e-9fd4-59ed401db1e8 became leader | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master-0 |
Started |
Started container kube-controller-manager-cert-syncer | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master-0 |
Created |
Created container: kube-controller-manager-cert-syncer | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e8ae0b5bab647ff989f276cead5f360bcb88c813f181d75dc3106eb5dbde0b39" already present on machine | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master-0 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e8ae0b5bab647ff989f276cead5f360bcb88c813f181d75dc3106eb5dbde0b39" already present on machine | |
openshift-kube-controller-manager |
cluster-policy-controller |
kube-controller-manager-master-0 |
ControlPlaneTopology |
unable to get control plane topology, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master-0 |
Created |
Created container: cluster-policy-controller | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master-0 |
Started |
Started container kube-controller-manager-recovery-controller | |
| (x2) | openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "StaticPodsDegraded: pod/kube-controller-manager-master-0 container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: \nNodeControllerDegraded: All master nodes are ready" |
openshift-kube-controller-manager |
cert-recovery-controller |
cert-recovery-controller-lock |
LeaderElection |
master-0_b1d3941a-9bd7-408d-9dfa-84bb0d09e142 became leader | |
| (x2) | openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "StaticPodsDegraded: pod/kube-controller-manager-master-0 container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: \nNodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready" |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 6"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 5; 0 nodes have achieved new revision 6" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 6" | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-installer-controller |
kube-controller-manager-operator |
NodeCurrentRevisionChanged |
Updated node "master-0" from revision 5 to 6 because static pod is ready | |
default |
node-controller |
master-0 |
RegisteredNode |
Node master-0 event: Registered Node master-0 in Controller | |
kube-system |
kube-controller-manager |
kube-controller-manager |
LeaderElection |
master-0_7061df03-e0ae-430f-89aa-09f1047afb76 became leader | |
sushy-emulator |
replicaset-controller |
nova-console-poller-76bf7fdbf7 |
SuccessfulCreate |
Created pod: nova-console-poller-76bf7fdbf7-kfl2c | |
sushy-emulator |
deployment-controller |
nova-console-poller |
ScalingReplicaSet |
Scaled up replica set nova-console-poller-76bf7fdbf7 to 1 | |
sushy-emulator |
multus |
nova-console-poller-76bf7fdbf7-kfl2c |
AddedInterface |
Add eth0 [10.128.0.118/23] from ovn-kubernetes | |
sushy-emulator |
kubelet |
nova-console-poller-76bf7fdbf7-kfl2c |
Pulling |
Pulling image "quay.io/rhn_gps_hjensas/nova-console-poller:latest" | |
sushy-emulator |
kubelet |
nova-console-poller-76bf7fdbf7-kfl2c |
Pulled |
Successfully pulled image "quay.io/rhn_gps_hjensas/nova-console-poller:latest" in 5.309s (5.309s including waiting). Image size: 202633582 bytes. | |
sushy-emulator |
kubelet |
nova-console-poller-76bf7fdbf7-kfl2c |
Created |
Created container: console-poller-fa82354e-b365-4023-bd5d-feafc8ab34a4 | |
sushy-emulator |
kubelet |
nova-console-poller-76bf7fdbf7-kfl2c |
Pulling |
Pulling image "quay.io/rhn_gps_hjensas/nova-console-poller:latest" | |
sushy-emulator |
kubelet |
nova-console-poller-76bf7fdbf7-kfl2c |
Pulled |
Successfully pulled image "quay.io/rhn_gps_hjensas/nova-console-poller:latest" in 418ms (418ms including waiting). Image size: 202633582 bytes. | |
sushy-emulator |
kubelet |
nova-console-poller-76bf7fdbf7-kfl2c |
Created |
Created container: console-poller-188bdd5d-8d4a-4c53-bae0-70b078fdcc0a | |
sushy-emulator |
kubelet |
nova-console-poller-76bf7fdbf7-kfl2c |
Started |
Started container console-poller-fa82354e-b365-4023-bd5d-feafc8ab34a4 | |
sushy-emulator |
kubelet |
nova-console-poller-76bf7fdbf7-kfl2c |
Started |
Started container console-poller-188bdd5d-8d4a-4c53-bae0-70b078fdcc0a | |
sushy-emulator |
deployment-controller |
nova-console-recorder |
ScalingReplicaSet |
Scaled up replica set nova-console-recorder-7ccbcf9885 to 1 | |
sushy-emulator |
replicaset-controller |
nova-console-recorder-7ccbcf9885 |
SuccessfulCreate |
Created pod: nova-console-recorder-7ccbcf9885-b7b8v | |
sushy-emulator |
kubelet |
nova-console-recorder-7ccbcf9885-b7b8v |
Pulling |
Pulling image "quay.io/rhn_gps_hjensas/nova-console-recorder:latest" | |
sushy-emulator |
multus |
nova-console-recorder-7ccbcf9885-b7b8v |
AddedInterface |
Add eth0 [10.128.0.119/23] from ovn-kubernetes | |
sushy-emulator |
kubelet |
nova-console-recorder-7ccbcf9885-b7b8v |
Pulled |
Successfully pulled image "quay.io/rhn_gps_hjensas/nova-console-recorder:latest" in 13.16s (13.16s including waiting). Image size: 664104156 bytes. | |
sushy-emulator |
kubelet |
nova-console-recorder-7ccbcf9885-b7b8v |
Started |
Started container console-recorder-fa82354e-b365-4023-bd5d-feafc8ab34a4 | |
sushy-emulator |
kubelet |
nova-console-recorder-7ccbcf9885-b7b8v |
Pulling |
Pulling image "quay.io/rhn_gps_hjensas/nova-console-recorder:latest" | |
sushy-emulator |
kubelet |
nova-console-recorder-7ccbcf9885-b7b8v |
Pulled |
Successfully pulled image "quay.io/rhn_gps_hjensas/nova-console-recorder:latest" in 464ms (464ms including waiting). Image size: 664104156 bytes. | |
sushy-emulator |
kubelet |
nova-console-recorder-7ccbcf9885-b7b8v |
Created |
Created container: console-recorder-fa82354e-b365-4023-bd5d-feafc8ab34a4 | |
sushy-emulator |
kubelet |
nova-console-recorder-7ccbcf9885-b7b8v |
Created |
Created container: console-recorder-188bdd5d-8d4a-4c53-bae0-70b078fdcc0a | |
sushy-emulator |
kubelet |
nova-console-recorder-7ccbcf9885-b7b8v |
Started |
Started container console-recorder-188bdd5d-8d4a-4c53-bae0-70b078fdcc0a | |
| | openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-storage namespace |
| | openshift-marketplace | job-controller | 7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d40dd54 | SuccessfulCreate | Created pod: 7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4s65nx |
| | openshift-marketplace | multus | 7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4s65nx | AddedInterface | Add eth0 [10.128.0.120/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | 7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4s65nx | Pulling | Pulling image "registry.redhat.io/lvms4/lvms-operator-bundle@sha256:82bcaba7f44d28e8529915af868c847107932a6f8cc9d2eaf34796c578c7a5ba" |
| | openshift-marketplace | kubelet | 7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4s65nx | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" already present on machine |
| | openshift-marketplace | kubelet | 7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4s65nx | Created | Created container: util |
| | openshift-marketplace | kubelet | 7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4s65nx | Started | Started container util |
| | openshift-marketplace | kubelet | 7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4s65nx | Started | Started container pull |
| | openshift-marketplace | kubelet | 7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4s65nx | Created | Created container: extract |
| | openshift-marketplace | kubelet | 7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4s65nx | Created | Created container: pull |
| | openshift-marketplace | kubelet | 7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4s65nx | Pulled | Successfully pulled image "registry.redhat.io/lvms4/lvms-operator-bundle@sha256:82bcaba7f44d28e8529915af868c847107932a6f8cc9d2eaf34796c578c7a5ba" in 1.579s (1.579s including waiting). Image size: 108204 bytes. |
| | openshift-marketplace | kubelet | 7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4s65nx | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aca57c8abaa83b2d1dce95fa7fe5b9416be70e100957ce48f212e2ec404387bc" already present on machine |
| | openshift-marketplace | kubelet | 7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4s65nx | Started | Started container extract |
| | openshift-marketplace | job-controller | 7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d40dd54 | Completed | Job completed |
| (x2) | openshift-storage | operator-lifecycle-manager | lvms-operator.v4.18.4 | RequirementsUnknown | requirements not yet checked |
| (x3) | openshift-storage | operator-lifecycle-manager | lvms-operator.v4.18.4 | RequirementsNotMet | one or more requirements couldn't be found |
| | default | endpoint-controller | lvms-operator-metrics-service | FailedToCreateEndpoint | Failed to create endpoint for service openshift-storage/lvms-operator-metrics-service: endpoints "lvms-operator-metrics-service" already exists |
| | openshift-storage | replicaset-controller | lvms-operator-59b4cb8ccf | SuccessfulCreate | Created pod: lvms-operator-59b4cb8ccf-q5dk5 |
| (x2) | openshift-storage | operator-lifecycle-manager | lvms-operator.v4.18.4 | AllRequirementsMet | all requirements found, attempting install |
| | openshift-storage | deployment-controller | lvms-operator | ScalingReplicaSet | Scaled up replica set lvms-operator-59b4cb8ccf to 1 |
| | openshift-storage | operator-lifecycle-manager | lvms-operator.v4.18.4 | InstallSucceeded | waiting for install components to report healthy |
| (x2) | openshift-storage | operator-lifecycle-manager | lvms-operator.v4.18.4 | InstallWaiting | installing: waiting for deployment lvms-operator to become ready: deployment "lvms-operator" not available: Deployment does not have minimum availability. |
| | openshift-storage | multus | lvms-operator-59b4cb8ccf-q5dk5 | AddedInterface | Add eth0 [10.128.0.121/23] from ovn-kubernetes |
| | openshift-storage | kubelet | lvms-operator-59b4cb8ccf-q5dk5 | Pulling | Pulling image "registry.redhat.io/lvms4/lvms-rhel9-operator@sha256:be49109ec5be53a783b2de9bc6529db99930a69021577c05cfa0bf1312e06d69" |
| | openshift-storage | kubelet | lvms-operator-59b4cb8ccf-q5dk5 | Started | Started container manager |
| | openshift-storage | kubelet | lvms-operator-59b4cb8ccf-q5dk5 | Created | Created container: manager |
| | openshift-storage | kubelet | lvms-operator-59b4cb8ccf-q5dk5 | Pulled | Successfully pulled image "registry.redhat.io/lvms4/lvms-rhel9-operator@sha256:be49109ec5be53a783b2de9bc6529db99930a69021577c05cfa0bf1312e06d69" in 4.879s (4.879s including waiting). Image size: 238305644 bytes. |
| (x2) | openshift-storage | operator-lifecycle-manager | lvms-operator.v4.18.4 | InstallSucceeded | install strategy completed with no errors |
| | openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for metallb-system namespace |
| | openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for cert-manager-operator namespace |
| | openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-nmstate namespace |
| | openshift-marketplace | job-controller | 925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56eb0c | SuccessfulCreate | Created pod: 925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5bq8w9 |
| | openshift-marketplace | multus | 925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5bq8w9 | AddedInterface | Add eth0 [10.128.0.122/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | 925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5bq8w9 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" already present on machine |
| | openshift-marketplace | job-controller | a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213cf971 | SuccessfulCreate | Created pod: a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2132lnds |
| | openshift-marketplace | kubelet | 925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5bq8w9 | Created | Created container: util |
| | openshift-marketplace | kubelet | 925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5bq8w9 | Started | Started container util |
| | openshift-marketplace | kubelet | a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2132lnds | Created | Created container: util |
| | openshift-marketplace | kubelet | a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2132lnds | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" already present on machine |
| | openshift-marketplace | kubelet | a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2132lnds | Started | Started container util |
| | openshift-marketplace | kubelet | 925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5bq8w9 | Pulling | Pulling image "registry.redhat.io/cert-manager/cert-manager-operator-bundle@sha256:e4e3f81062da90a9cfcdce27085f0624952374a9aec5fbdd5796a09d24f83908" |
| | openshift-marketplace | multus | a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2132lnds | AddedInterface | Add eth0 [10.128.0.123/23] from ovn-kubernetes |
| | openshift-marketplace | job-controller | f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca17d05 | SuccessfulCreate | Created pod: f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecan55kc |
| | openshift-marketplace | multus | f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecan55kc | AddedInterface | Add eth0 [10.128.0.124/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecan55kc | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" already present on machine |
| | openshift-marketplace | kubelet | f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecan55kc | Created | Created container: util |
| | openshift-marketplace | kubelet | a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2132lnds | Pulling | Pulling image "registry.redhat.io/openshift4/ose-metallb-operator-bundle@sha256:fe1daf2d4fdbcdbec3aea255d5b887fcf7fbd4db2b5917c360b916b31ebf64c1" |
| | openshift-marketplace | kubelet | f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecan55kc | Started | Started container util |
| | openshift-marketplace | kubelet | f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecan55kc | Pulling | Pulling image "registry.redhat.io/openshift4/kubernetes-nmstate-operator-bundle@sha256:d1fe0ac3bcc79ad46b9ed768a442d80da0bf4bdcb78e73b315d17bd1776721bf" |
| | openshift-marketplace | kubelet | f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecan55kc | Pulled | Successfully pulled image "registry.redhat.io/openshift4/kubernetes-nmstate-operator-bundle@sha256:d1fe0ac3bcc79ad46b9ed768a442d80da0bf4bdcb78e73b315d17bd1776721bf" in 1.464s (1.464s including waiting). Image size: 176636 bytes. |
| | openshift-marketplace | kubelet | 925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5bq8w9 | Pulled | Successfully pulled image "registry.redhat.io/cert-manager/cert-manager-operator-bundle@sha256:e4e3f81062da90a9cfcdce27085f0624952374a9aec5fbdd5796a09d24f83908" in 3.492s (3.492s including waiting). Image size: 108352841 bytes. |
| | openshift-marketplace | kubelet | 925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5bq8w9 | Created | Created container: pull |
| | openshift-marketplace | kubelet | 925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5bq8w9 | Started | Started container pull |
| | openshift-marketplace | kubelet | a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2132lnds | Pulled | Successfully pulled image "registry.redhat.io/openshift4/ose-metallb-operator-bundle@sha256:fe1daf2d4fdbcdbec3aea255d5b887fcf7fbd4db2b5917c360b916b31ebf64c1" in 2.465s (2.465s including waiting). Image size: 329517 bytes. |
| | openshift-marketplace | kubelet | f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecan55kc | Started | Started container pull |
| | openshift-marketplace | kubelet | a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2132lnds | Started | Started container pull |
| | openshift-marketplace | kubelet | f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecan55kc | Created | Created container: pull |
| | openshift-marketplace | kubelet | a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2132lnds | Created | Created container: pull |
| | openshift-marketplace | kubelet | a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2132lnds | Started | Started container extract |
| | openshift-marketplace | kubelet | a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2132lnds | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aca57c8abaa83b2d1dce95fa7fe5b9416be70e100957ce48f212e2ec404387bc" already present on machine |
| | openshift-marketplace | kubelet | 925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5bq8w9 | Started | Started container extract |
| | openshift-marketplace | kubelet | f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecan55kc | Created | Created container: extract |
| | openshift-marketplace | kubelet | f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecan55kc | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aca57c8abaa83b2d1dce95fa7fe5b9416be70e100957ce48f212e2ec404387bc" already present on machine |
| | openshift-marketplace | kubelet | 925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5bq8w9 | Created | Created container: extract |
| | openshift-marketplace | kubelet | f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecan55kc | Started | Started container extract |
| | openshift-marketplace | kubelet | a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2132lnds | Created | Created container: extract |
| | openshift-marketplace | kubelet | 925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5bq8w9 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aca57c8abaa83b2d1dce95fa7fe5b9416be70e100957ce48f212e2ec404387bc" already present on machine |
| | openshift-marketplace | job-controller | f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca17d05 | Completed | Job completed |
| | openshift-marketplace | job-controller | a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213cf971 | Completed | Job completed |
| | openshift-marketplace | job-controller | 925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56eb0c | Completed | Job completed |
| | openshift-marketplace | kubelet | 98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mmtbg | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" already present on machine |
| | openshift-marketplace | job-controller | 98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f081954b | SuccessfulCreate | Created pod: 98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mmtbg |
| | openshift-marketplace | multus | 98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mmtbg | AddedInterface | Add eth0 [10.128.0.125/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | 98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mmtbg | Started | Started container util |
| | openshift-marketplace | kubelet | 98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mmtbg | Pulling | Pulling image "registry.redhat.io/cluster-observability-operator/cluster-observability-operator-bundle@sha256:a3b8e1f3f8d154095f365ccbb163f2cf3852d6091b1f74773a8b5a2ee5c1cee6" |
| | openshift-marketplace | kubelet | 98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mmtbg | Created | Created container: util |
| | openshift-marketplace | kubelet | 98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mmtbg | Pulled | Successfully pulled image "registry.redhat.io/cluster-observability-operator/cluster-observability-operator-bundle@sha256:a3b8e1f3f8d154095f365ccbb163f2cf3852d6091b1f74773a8b5a2ee5c1cee6" in 1.347s (1.347s including waiting). Image size: 4900233 bytes. |
| | openshift-marketplace | kubelet | 98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mmtbg | Started | Started container pull |
| | openshift-marketplace | kubelet | 98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mmtbg | Created | Created container: pull |
| | openshift-marketplace | kubelet | 98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mmtbg | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aca57c8abaa83b2d1dce95fa7fe5b9416be70e100957ce48f212e2ec404387bc" already present on machine |
| | openshift-marketplace | kubelet | 98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mmtbg | Created | Created container: extract |
| | openshift-marketplace | kubelet | 98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mmtbg | Started | Started container extract |
| | metallb-system | operator-lifecycle-manager | metallb-operator.v4.18.0-202601302238 | RequirementsUnknown | requirements not yet checked |
| | openshift-marketplace | job-controller | 98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f081954b | Completed | Job completed |
| | metallb-system | operator-lifecycle-manager | metallb-operator.v4.18.0-202601302238 | RequirementsNotMet | one or more requirements couldn't be found |
| | default | cert-manager-istio-csr-controller | | ControllerStarted | controller is starting |
| | cert-manager | deployment-controller | cert-manager-webhook | ScalingReplicaSet | Scaled up replica set cert-manager-webhook-6888856db4 to 1 |
| | openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for cert-manager namespace |
| (x9) | cert-manager | replicaset-controller | cert-manager-webhook-6888856db4 | FailedCreate | Error creating: pods "cert-manager-webhook-6888856db4-" is forbidden: error looking up service account cert-manager/cert-manager-webhook: serviceaccount "cert-manager-webhook" not found |
| | cert-manager | deployment-controller | cert-manager-cainjector | ScalingReplicaSet | Scaled up replica set cert-manager-cainjector-5545bd876 to 1 |
| (x9) | cert-manager | replicaset-controller | cert-manager-cainjector-5545bd876 | FailedCreate | Error creating: pods "cert-manager-cainjector-5545bd876-" is forbidden: error looking up service account cert-manager/cert-manager-cainjector: serviceaccount "cert-manager-cainjector" not found |
| | cert-manager | replicaset-controller | cert-manager-webhook-6888856db4 | SuccessfulCreate | Created pod: cert-manager-webhook-6888856db4-d6jf7 |
| | cert-manager | multus | cert-manager-webhook-6888856db4-d6jf7 | AddedInterface | Add eth0 [10.128.0.127/23] from ovn-kubernetes |
| | cert-manager | kubelet | cert-manager-webhook-6888856db4-d6jf7 | Pulling | Pulling image "registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:903ce74138b1ffc735846a7c5fcdf62bbe82ca29568a6b38caec2656f6637671" |
| | cert-manager | replicaset-controller | cert-manager-cainjector-5545bd876 | SuccessfulCreate | Created pod: cert-manager-cainjector-5545bd876-62r82 |
| | openshift-nmstate | operator-lifecycle-manager | kubernetes-nmstate-operator.4.18.0-202602041913 | RequirementsUnknown | requirements not yet checked |
| | cert-manager | deployment-controller | cert-manager | ScalingReplicaSet | Scaled up replica set cert-manager-545d4d4674 to 1 |
| | cert-manager | kubelet | cert-manager-cainjector-5545bd876-62r82 | Pulling | Pulling image "registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:903ce74138b1ffc735846a7c5fcdf62bbe82ca29568a6b38caec2656f6637671" |
| | cert-manager | multus | cert-manager-cainjector-5545bd876-62r82 | AddedInterface | Add eth0 [10.128.0.128/23] from ovn-kubernetes |
| | openshift-nmstate | multus | nmstate-operator-694c9596b7-vbkqw | AddedInterface | Add eth0 [10.128.0.129/23] from ovn-kubernetes |
| (x2) | openshift-nmstate | operator-lifecycle-manager | kubernetes-nmstate-operator.4.18.0-202602041913 | AllRequirementsMet | all requirements found, attempting install |
| (x2) | openshift-nmstate | operator-lifecycle-manager | kubernetes-nmstate-operator.4.18.0-202602041913 | InstallWaiting | installing: waiting for deployment nmstate-operator to become ready: deployment "nmstate-operator" not available: Deployment does not have minimum availability. |
| | openshift-nmstate | deployment-controller | nmstate-operator | ScalingReplicaSet | Scaled up replica set nmstate-operator-694c9596b7 to 1 |
| (x2) | openshift-nmstate | operator-lifecycle-manager | kubernetes-nmstate-operator.4.18.0-202602041913 | InstallSucceeded | waiting for install components to report healthy |
| | openshift-nmstate | kubelet | nmstate-operator-694c9596b7-vbkqw | Pulling | Pulling image "registry.redhat.io/openshift4/kubernetes-nmstate-rhel9-operator@sha256:925cc62624d736275cb6230edb9cc9d81e92a2ebb5cb6f38399657844523a9ce" |
| | openshift-nmstate | replicaset-controller | nmstate-operator-694c9596b7 | SuccessfulCreate | Created pod: nmstate-operator-694c9596b7-vbkqw |
| | openshift-nmstate | kubelet | nmstate-operator-694c9596b7-vbkqw | Started | Started container nmstate-operator |
| | cert-manager | kubelet | cert-manager-cainjector-5545bd876-62r82 | Created | Created container: cert-manager-cainjector |
| | metallb-system | deployment-controller | metallb-operator-webhook-server | ScalingReplicaSet | Scaled up replica set metallb-operator-webhook-server-7664575c4d to 1 |
| | metallb-system | replicaset-controller | metallb-operator-webhook-server-7664575c4d | SuccessfulCreate | Created pod: metallb-operator-webhook-server-7664575c4d-8f7gv |
| | openshift-nmstate | kubelet | nmstate-operator-694c9596b7-vbkqw | Created | Created container: nmstate-operator |
| (x11) | cert-manager | replicaset-controller | cert-manager-545d4d4674 | FailedCreate | Error creating: pods "cert-manager-545d4d4674-" is forbidden: error looking up service account cert-manager/cert-manager: serviceaccount "cert-manager" not found |
| | cert-manager | kubelet | cert-manager-webhook-6888856db4-d6jf7 | Started | Started container cert-manager-webhook |
| | metallb-system | deployment-controller | metallb-operator-controller-manager | ScalingReplicaSet | Scaled up replica set metallb-operator-controller-manager-7f874cc45d to 1 |
| | openshift-nmstate | kubelet | nmstate-operator-694c9596b7-vbkqw | Pulled | Successfully pulled image "registry.redhat.io/openshift4/kubernetes-nmstate-rhel9-operator@sha256:925cc62624d736275cb6230edb9cc9d81e92a2ebb5cb6f38399657844523a9ce" in 4.223s (4.223s including waiting). Image size: 451308023 bytes. |
| | metallb-system | replicaset-controller | metallb-operator-controller-manager-7f874cc45d | SuccessfulCreate | Created pod: metallb-operator-controller-manager-7f874cc45d-jsprx |
| | cert-manager | kubelet | cert-manager-cainjector-5545bd876-62r82 | Pulled | Successfully pulled image "registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:903ce74138b1ffc735846a7c5fcdf62bbe82ca29568a6b38caec2656f6637671" in 5.733s (5.733s including waiting). Image size: 319887149 bytes. |
| | cert-manager | kubelet | cert-manager-webhook-6888856db4-d6jf7 | Created | Created container: cert-manager-webhook |
| | cert-manager | kubelet | cert-manager-cainjector-5545bd876-62r82 | Started | Started container cert-manager-cainjector |
| | cert-manager | kubelet | cert-manager-webhook-6888856db4-d6jf7 | Pulled | Successfully pulled image "registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:903ce74138b1ffc735846a7c5fcdf62bbe82ca29568a6b38caec2656f6637671" in 6.69s (6.69s including waiting). Image size: 319887149 bytes. |
| | openshift-operators | operator-lifecycle-manager | cluster-observability-operator.v1.3.1 | RequirementsUnknown | requirements not yet checked |
| | metallb-system | multus | metallb-operator-webhook-server-7664575c4d-8f7gv | AddedInterface | Add eth0 [10.128.0.131/23] from ovn-kubernetes |
| | kube-system | cert-manager-cainjector-5545bd876-62r82_378069ad-803c-48d5-bdbd-cdcbd6e18c09 | cert-manager-cainjector-leader-election | LeaderElection | cert-manager-cainjector-5545bd876-62r82_378069ad-803c-48d5-bdbd-cdcbd6e18c09 became leader |
| | openshift-nmstate | operator-lifecycle-manager | kubernetes-nmstate-operator.4.18.0-202602041913 | InstallSucceeded | install strategy completed with no errors |
| | metallb-system | multus | metallb-operator-controller-manager-7f874cc45d-jsprx | AddedInterface | Add eth0 [10.128.0.130/23] from ovn-kubernetes |
| | metallb-system | kubelet | metallb-operator-controller-manager-7f874cc45d-jsprx | Pulling | Pulling image "registry.redhat.io/openshift4/metallb-rhel9-operator@sha256:6fd3e59fedf58b8842b17604b513ee43c81fcbc339b342383098ea81109a8854" |
| | openshift-operators | operator-lifecycle-manager | cluster-observability-operator.v1.3.1 | RequirementsNotMet | one or more requirements couldn't be found |
| | metallb-system | kubelet | metallb-operator-webhook-server-7664575c4d-8f7gv | Pulling | Pulling image "registry.redhat.io/openshift4/metallb-rhel9@sha256:5734cf213934e92a65bb43f1e280bfdfb5b614a7ef55fedf26acdbc5c020092e" |
| (x2) | openshift-operators | controllermanager | obo-prometheus-operator-admission-webhook | NoPods | No matching pods found |
| | cert-manager | replicaset-controller | cert-manager-545d4d4674 | SuccessfulCreate | Created pod: cert-manager-545d4d4674-xrzb8 |
| | cert-manager | multus | cert-manager-545d4d4674-xrzb8 | AddedInterface | Add eth0 [10.128.0.132/23] from ovn-kubernetes |
metallb-system |
kubelet |
metallb-operator-controller-manager-7f874cc45d-jsprx |
Started |
Started container manager | |
cert-manager |
kubelet |
cert-manager-545d4d4674-xrzb8 |
Created |
Created container: cert-manager-controller | |
metallb-system |
kubelet |
metallb-operator-webhook-server-7664575c4d-8f7gv |
Created |
Created container: webhook-server | |
metallb-system |
kubelet |
metallb-operator-webhook-server-7664575c4d-8f7gv |
Pulled |
Successfully pulled image "registry.redhat.io/openshift4/metallb-rhel9@sha256:5734cf213934e92a65bb43f1e280bfdfb5b614a7ef55fedf26acdbc5c020092e" in 5.718s (5.718s including waiting). Image size: 554925471 bytes. | |
metallb-system |
kubelet |
metallb-operator-controller-manager-7f874cc45d-jsprx |
Pulled |
Successfully pulled image "registry.redhat.io/openshift4/metallb-rhel9-operator@sha256:6fd3e59fedf58b8842b17604b513ee43c81fcbc339b342383098ea81109a8854" in 5.954s (5.954s including waiting). Image size: 462337664 bytes. | |
metallb-system |
kubelet |
metallb-operator-controller-manager-7f874cc45d-jsprx |
Started |
Started container manager | |
metallb-system |
kubelet |
metallb-operator-controller-manager-7f874cc45d-jsprx |
Created |
Created container: manager | |
cert-manager |
kubelet |
cert-manager-545d4d4674-xrzb8 |
Pulled |
Container image "registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:903ce74138b1ffc735846a7c5fcdf62bbe82ca29568a6b38caec2656f6637671" already present on machine | |
metallb-system |
kubelet |
metallb-operator-webhook-server-7664575c4d-8f7gv |
Started |
Started container webhook-server | |
metallb-system |
kubelet |
metallb-operator-controller-manager-7f874cc45d-jsprx |
Pulled |
Successfully pulled image "registry.redhat.io/openshift4/metallb-rhel9-operator@sha256:6fd3e59fedf58b8842b17604b513ee43c81fcbc339b342383098ea81109a8854" in 5.954s (5.954s including waiting). Image size: 462337664 bytes. | |
cert-manager |
kubelet |
cert-manager-545d4d4674-xrzb8 |
Pulled |
Container image "registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:903ce74138b1ffc735846a7c5fcdf62bbe82ca29568a6b38caec2656f6637671" already present on machine | |
cert-manager |
kubelet |
cert-manager-545d4d4674-xrzb8 |
Started |
Started container cert-manager-controller | |
metallb-system |
kubelet |
metallb-operator-webhook-server-7664575c4d-8f7gv |
Started |
Started container webhook-server | |
cert-manager |
kubelet |
cert-manager-545d4d4674-xrzb8 |
Started |
Started container cert-manager-controller | |
cert-manager |
kubelet |
cert-manager-545d4d4674-xrzb8 |
Created |
Created container: cert-manager-controller | |
metallb-system |
kubelet |
metallb-operator-controller-manager-7f874cc45d-jsprx |
Created |
Created container: manager | |
metallb-system |
kubelet |
metallb-operator-webhook-server-7664575c4d-8f7gv |
Pulled |
Successfully pulled image "registry.redhat.io/openshift4/metallb-rhel9@sha256:5734cf213934e92a65bb43f1e280bfdfb5b614a7ef55fedf26acdbc5c020092e" in 5.718s (5.718s including waiting). Image size: 554925471 bytes. | |
metallb-system |
kubelet |
metallb-operator-webhook-server-7664575c4d-8f7gv |
Created |
Created container: webhook-server | |
metallb-system |
metallb-operator-controller-manager-7f874cc45d-jsprx_f933412a-8212-4110-b7b0-b71653b62300 |
metallb.io.metallboperator |
LeaderElection |
metallb-operator-controller-manager-7f874cc45d-jsprx_f933412a-8212-4110-b7b0-b71653b62300 became leader | |
metallb-system |
metallb-operator-controller-manager-7f874cc45d-jsprx_f933412a-8212-4110-b7b0-b71653b62300 |
metallb.io.metallboperator |
LeaderElection |
metallb-operator-controller-manager-7f874cc45d-jsprx_f933412a-8212-4110-b7b0-b71653b62300 became leader | |
openshift-operators |
operator-lifecycle-manager |
cluster-observability-operator.v1.3.1 |
AllRequirementsMet |
all requirements found, attempting install | |
openshift-operators |
replicaset-controller |
obo-prometheus-operator-68bc856cb9 |
SuccessfulCreate |
Created pod: obo-prometheus-operator-68bc856cb9-5tqc8 | |
openshift-operators |
replicaset-controller |
obo-prometheus-operator-68bc856cb9 |
SuccessfulCreate |
Created pod: obo-prometheus-operator-68bc856cb9-5tqc8 | |
openshift-operators |
operator-lifecycle-manager |
cluster-observability-operator.v1.3.1 |
AllRequirementsMet |
all requirements found, attempting install | |
openshift-operators |
deployment-controller |
obo-prometheus-operator |
ScalingReplicaSet |
Scaled up replica set obo-prometheus-operator-68bc856cb9 to 1 | |
openshift-operators |
deployment-controller |
obo-prometheus-operator |
ScalingReplicaSet |
Scaled up replica set obo-prometheus-operator-68bc856cb9 to 1 | |
openshift-operators |
multus |
obo-prometheus-operator-68bc856cb9-5tqc8 |
AddedInterface |
Add eth0 [10.128.0.133/23] from ovn-kubernetes | |
openshift-operators |
deployment-controller |
perses-operator |
ScalingReplicaSet |
Scaled up replica set perses-operator-5bf474d74f to 1 | |
openshift-operators |
replicaset-controller |
obo-prometheus-operator-admission-webhook-79ffb45c8c |
SuccessfulCreate |
Created pod: obo-prometheus-operator-admission-webhook-79ffb45c8c-9jw4s | |
openshift-operators |
replicaset-controller |
obo-prometheus-operator-admission-webhook-79ffb45c8c |
SuccessfulCreate |
Created pod: obo-prometheus-operator-admission-webhook-79ffb45c8c-gqgb7 | |
openshift-operators |
multus |
obo-prometheus-operator-68bc856cb9-5tqc8 |
AddedInterface |
Add eth0 [10.128.0.133/23] from ovn-kubernetes | |
openshift-operators |
deployment-controller |
obo-prometheus-operator-admission-webhook |
ScalingReplicaSet |
Scaled up replica set obo-prometheus-operator-admission-webhook-79ffb45c8c to 2 | |
openshift-operators |
deployment-controller |
obo-prometheus-operator-admission-webhook |
ScalingReplicaSet |
Scaled up replica set obo-prometheus-operator-admission-webhook-79ffb45c8c to 2 | |
openshift-operators |
replicaset-controller |
observability-operator-59bdc8b94 |
SuccessfulCreate |
Created pod: observability-operator-59bdc8b94-d8nkj | |
openshift-operators |
multus |
obo-prometheus-operator-admission-webhook-79ffb45c8c-gqgb7 |
AddedInterface |
Add eth0 [10.128.0.134/23] from ovn-kubernetes | |
openshift-operators |
operator-lifecycle-manager |
cluster-observability-operator.v1.3.1 |
InstallSucceeded |
waiting for install components to report healthy | |
openshift-operators |
kubelet |
obo-prometheus-operator-68bc856cb9-5tqc8 |
Pulling |
Pulling image "registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:e7e5f4c5e8ab0ba298ef0295a7137d438a42eb177d9322212cde6ba8f367912a" | |
openshift-operators |
kubelet |
obo-prometheus-operator-68bc856cb9-5tqc8 |
Pulling |
Pulling image "registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:e7e5f4c5e8ab0ba298ef0295a7137d438a42eb177d9322212cde6ba8f367912a" | |
openshift-operators |
replicaset-controller |
perses-operator-5bf474d74f |
SuccessfulCreate |
Created pod: perses-operator-5bf474d74f-tw9pm | |
openshift-operators |
deployment-controller |
observability-operator |
ScalingReplicaSet |
Scaled up replica set observability-operator-59bdc8b94 to 1 | |
openshift-operators |
replicaset-controller |
observability-operator-59bdc8b94 |
SuccessfulCreate |
Created pod: observability-operator-59bdc8b94-d8nkj | |
openshift-operators |
replicaset-controller |
obo-prometheus-operator-admission-webhook-79ffb45c8c |
SuccessfulCreate |
Created pod: obo-prometheus-operator-admission-webhook-79ffb45c8c-gqgb7 | |
openshift-operators |
deployment-controller |
observability-operator |
ScalingReplicaSet |
Scaled up replica set observability-operator-59bdc8b94 to 1 | |
openshift-operators |
deployment-controller |
perses-operator |
ScalingReplicaSet |
Scaled up replica set perses-operator-5bf474d74f to 1 | |
openshift-operators |
replicaset-controller |
obo-prometheus-operator-admission-webhook-79ffb45c8c |
SuccessfulCreate |
Created pod: obo-prometheus-operator-admission-webhook-79ffb45c8c-9jw4s | |
openshift-operators |
operator-lifecycle-manager |
cluster-observability-operator.v1.3.1 |
InstallSucceeded |
waiting for install components to report healthy | |
openshift-operators |
multus |
obo-prometheus-operator-admission-webhook-79ffb45c8c-gqgb7 |
AddedInterface |
Add eth0 [10.128.0.134/23] from ovn-kubernetes | |
openshift-operators |
replicaset-controller |
perses-operator-5bf474d74f |
SuccessfulCreate |
Created pod: perses-operator-5bf474d74f-tw9pm | |
openshift-operators |
kubelet |
observability-operator-59bdc8b94-d8nkj |
Pulling |
Pulling image "registry.redhat.io/cluster-observability-operator/cluster-observability-rhel9-operator@sha256:2ecf763b02048d2cf4c17967a7b2cacc7afd6af0e963a39579d876f8f4170e3c" | |
openshift-operators |
kubelet |
obo-prometheus-operator-admission-webhook-79ffb45c8c-gqgb7 |
Pulling |
Pulling image "registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:42ebc3571195d8c41fd01b8d08e98fe2cc12c1caabea251aecb4442d8eade4ea" | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ConfigMapUpdated |
Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml | |
openshift-operators |
multus |
observability-operator-59bdc8b94-d8nkj |
AddedInterface |
Add eth0 [10.128.0.136/23] from ovn-kubernetes | |
openshift-operators |
kubelet |
observability-operator-59bdc8b94-d8nkj |
Pulling |
Pulling image "registry.redhat.io/cluster-observability-operator/cluster-observability-rhel9-operator@sha256:2ecf763b02048d2cf4c17967a7b2cacc7afd6af0e963a39579d876f8f4170e3c" | |
openshift-operators |
kubelet |
obo-prometheus-operator-admission-webhook-79ffb45c8c-gqgb7 |
Pulling |
Pulling image "registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:42ebc3571195d8c41fd01b8d08e98fe2cc12c1caabea251aecb4442d8eade4ea" | |
openshift-operators |
multus |
obo-prometheus-operator-admission-webhook-79ffb45c8c-9jw4s |
AddedInterface |
Add eth0 [10.128.0.135/23] from ovn-kubernetes | |
openshift-operators |
multus |
observability-operator-59bdc8b94-d8nkj |
AddedInterface |
Add eth0 [10.128.0.136/23] from ovn-kubernetes | |
openshift-operators |
multus |
obo-prometheus-operator-admission-webhook-79ffb45c8c-9jw4s |
AddedInterface |
Add eth0 [10.128.0.135/23] from ovn-kubernetes | |
openshift-operators |
kubelet |
obo-prometheus-operator-admission-webhook-79ffb45c8c-9jw4s |
Pulling |
Pulling image "registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:42ebc3571195d8c41fd01b8d08e98fe2cc12c1caabea251aecb4442d8eade4ea" | |
openshift-operators |
multus |
perses-operator-5bf474d74f-tw9pm |
AddedInterface |
Add eth0 [10.128.0.137/23] from ovn-kubernetes | |
openshift-operators |
kubelet |
obo-prometheus-operator-admission-webhook-79ffb45c8c-9jw4s |
Pulling |
Pulling image "registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:42ebc3571195d8c41fd01b8d08e98fe2cc12c1caabea251aecb4442d8eade4ea" | |
openshift-operators |
kubelet |
perses-operator-5bf474d74f-tw9pm |
Pulling |
Pulling image "registry.redhat.io/cluster-observability-operator/perses-rhel9-operator@sha256:b5c8526d2ae660fe092dd8a7acf18ec4957d5c265890a222f55396fc2cdaeed8" | |
openshift-operators |
multus |
perses-operator-5bf474d74f-tw9pm |
AddedInterface |
Add eth0 [10.128.0.137/23] from ovn-kubernetes | |
openshift-operators |
kubelet |
perses-operator-5bf474d74f-tw9pm |
Pulling |
Pulling image "registry.redhat.io/cluster-observability-operator/perses-rhel9-operator@sha256:b5c8526d2ae660fe092dd8a7acf18ec4957d5c265890a222f55396fc2cdaeed8" | |
openshift-operators |
operator-lifecycle-manager |
cluster-observability-operator.v1.3.1 |
InstallWaiting |
installing: waiting for deployment obo-prometheus-operator to become ready: deployment "obo-prometheus-operator" not available: Deployment does not have minimum availability. | |
openshift-operators |
operator-lifecycle-manager |
cluster-observability-operator.v1.3.1 |
InstallWaiting |
installing: waiting for deployment obo-prometheus-operator to become ready: deployment "obo-prometheus-operator" not available: Deployment does not have minimum availability. | |
| (x2) | metallb-system |
operator-lifecycle-manager |
install-6g6vh |
AppliedWithWarnings |
1 warning(s) generated during installation of operator "metallb-operator.v4.18.0-202601302238" (CustomResourceDefinition "bgppeers.metallb.io"): v1beta1 is deprecated, please use v1beta2 |
| (x2) | metallb-system |
operator-lifecycle-manager |
install-6g6vh |
AppliedWithWarnings |
1 warning(s) generated during installation of operator "metallb-operator.v4.18.0-202601302238" (CustomResourceDefinition "bgppeers.metallb.io"): v1beta1 is deprecated, please use v1beta2 |
| (x2) | metallb-system |
operator-lifecycle-manager |
metallb-operator.v4.18.0-202601302238 |
NeedsReinstall |
calculated deployment install is bad |
| (x2) | metallb-system |
operator-lifecycle-manager |
metallb-operator.v4.18.0-202601302238 |
NeedsReinstall |
calculated deployment install is bad |
| (x3) | metallb-system |
operator-lifecycle-manager |
metallb-operator.v4.18.0-202601302238 |
AllRequirementsMet |
all requirements found, attempting install |
| (x3) | metallb-system |
operator-lifecycle-manager |
metallb-operator.v4.18.0-202601302238 |
AllRequirementsMet |
all requirements found, attempting install |
openshift-operators |
kubelet |
obo-prometheus-operator-68bc856cb9-5tqc8 |
Pulled |
Successfully pulled image "registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:e7e5f4c5e8ab0ba298ef0295a7137d438a42eb177d9322212cde6ba8f367912a" in 12.062s (12.062s including waiting). Image size: 199215153 bytes. | |
openshift-operators |
kubelet |
observability-operator-59bdc8b94-d8nkj |
Pulled |
Successfully pulled image "registry.redhat.io/cluster-observability-operator/cluster-observability-rhel9-operator@sha256:2ecf763b02048d2cf4c17967a7b2cacc7afd6af0e963a39579d876f8f4170e3c" in 11.652s (11.652s including waiting). Image size: 399540002 bytes. | |
openshift-operators |
kubelet |
perses-operator-5bf474d74f-tw9pm |
Pulled |
Successfully pulled image "registry.redhat.io/cluster-observability-operator/perses-rhel9-operator@sha256:b5c8526d2ae660fe092dd8a7acf18ec4957d5c265890a222f55396fc2cdaeed8" in 11.572s (11.572s including waiting). Image size: 174807977 bytes. | |
openshift-operators |
kubelet |
obo-prometheus-operator-admission-webhook-79ffb45c8c-9jw4s |
Pulled |
Successfully pulled image "registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:42ebc3571195d8c41fd01b8d08e98fe2cc12c1caabea251aecb4442d8eade4ea" in 11.811s (11.811s including waiting). Image size: 151103408 bytes. | |
openshift-operators |
kubelet |
obo-prometheus-operator-admission-webhook-79ffb45c8c-gqgb7 |
Pulled |
Successfully pulled image "registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:42ebc3571195d8c41fd01b8d08e98fe2cc12c1caabea251aecb4442d8eade4ea" in 11.921s (11.921s including waiting). Image size: 151103408 bytes. | |
openshift-operators |
kubelet |
perses-operator-5bf474d74f-tw9pm |
Pulled |
Successfully pulled image "registry.redhat.io/cluster-observability-operator/perses-rhel9-operator@sha256:b5c8526d2ae660fe092dd8a7acf18ec4957d5c265890a222f55396fc2cdaeed8" in 11.572s (11.572s including waiting). Image size: 174807977 bytes. | |
openshift-operators |
kubelet |
observability-operator-59bdc8b94-d8nkj |
Pulled |
Successfully pulled image "registry.redhat.io/cluster-observability-operator/cluster-observability-rhel9-operator@sha256:2ecf763b02048d2cf4c17967a7b2cacc7afd6af0e963a39579d876f8f4170e3c" in 11.652s (11.652s including waiting). Image size: 399540002 bytes. | |
openshift-operators |
kubelet |
obo-prometheus-operator-68bc856cb9-5tqc8 |
Pulled |
Successfully pulled image "registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:e7e5f4c5e8ab0ba298ef0295a7137d438a42eb177d9322212cde6ba8f367912a" in 12.062s (12.062s including waiting). Image size: 199215153 bytes. | |
openshift-operators |
kubelet |
obo-prometheus-operator-admission-webhook-79ffb45c8c-gqgb7 |
Pulled |
Successfully pulled image "registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:42ebc3571195d8c41fd01b8d08e98fe2cc12c1caabea251aecb4442d8eade4ea" in 11.921s (11.921s including waiting). Image size: 151103408 bytes. | |
openshift-operators |
kubelet |
obo-prometheus-operator-admission-webhook-79ffb45c8c-9jw4s |
Pulled |
Successfully pulled image "registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:42ebc3571195d8c41fd01b8d08e98fe2cc12c1caabea251aecb4442d8eade4ea" in 11.811s (11.811s including waiting). Image size: 151103408 bytes. | |
openshift-operators |
kubelet |
perses-operator-5bf474d74f-tw9pm |
Created |
Created container: perses-operator | |
openshift-operators |
kubelet |
observability-operator-59bdc8b94-d8nkj |
Created |
Created container: operator | |
openshift-operators |
kubelet |
obo-prometheus-operator-68bc856cb9-5tqc8 |
Created |
Created container: prometheus-operator | |
openshift-operators |
kubelet |
obo-prometheus-operator-admission-webhook-79ffb45c8c-gqgb7 |
Started |
Started container prometheus-operator-admission-webhook | |
| (x3) | metallb-system |
operator-lifecycle-manager |
metallb-operator.v4.18.0-202601302238 |
InstallSucceeded |
waiting for install components to report healthy |
| (x3) | metallb-system |
operator-lifecycle-manager |
metallb-operator.v4.18.0-202601302238 |
InstallSucceeded |
waiting for install components to report healthy |
openshift-operators |
kubelet |
obo-prometheus-operator-admission-webhook-79ffb45c8c-gqgb7 |
Created |
Created container: prometheus-operator-admission-webhook | |
openshift-operators |
kubelet |
obo-prometheus-operator-admission-webhook-79ffb45c8c-9jw4s |
Started |
Started container prometheus-operator-admission-webhook | |
openshift-operators |
kubelet |
perses-operator-5bf474d74f-tw9pm |
Started |
Started container perses-operator | |
openshift-operators |
kubelet |
obo-prometheus-operator-68bc856cb9-5tqc8 |
Created |
Created container: prometheus-operator | |
openshift-operators |
kubelet |
perses-operator-5bf474d74f-tw9pm |
Started |
Started container perses-operator | |
openshift-operators |
kubelet |
obo-prometheus-operator-admission-webhook-79ffb45c8c-9jw4s |
Started |
Started container prometheus-operator-admission-webhook | |
openshift-operators |
kubelet |
obo-prometheus-operator-68bc856cb9-5tqc8 |
Started |
Started container prometheus-operator | |
openshift-operators |
kubelet |
obo-prometheus-operator-68bc856cb9-5tqc8 |
Started |
Started container prometheus-operator | |
openshift-operators |
kubelet |
observability-operator-59bdc8b94-d8nkj |
Started |
Started container operator | |
openshift-operators |
kubelet |
perses-operator-5bf474d74f-tw9pm |
Created |
Created container: perses-operator | |
openshift-operators |
kubelet |
observability-operator-59bdc8b94-d8nkj |
Created |
Created container: operator | |
openshift-operators |
kubelet |
observability-operator-59bdc8b94-d8nkj |
Started |
Started container operator | |
openshift-operators |
kubelet |
obo-prometheus-operator-admission-webhook-79ffb45c8c-gqgb7 |
Started |
Started container prometheus-operator-admission-webhook | |
openshift-operators |
kubelet |
obo-prometheus-operator-admission-webhook-79ffb45c8c-gqgb7 |
Created |
Created container: prometheus-operator-admission-webhook | |
openshift-operators |
kubelet |
obo-prometheus-operator-admission-webhook-79ffb45c8c-9jw4s |
Created |
Created container: prometheus-operator-admission-webhook | |
openshift-operators |
kubelet |
obo-prometheus-operator-admission-webhook-79ffb45c8c-9jw4s |
Created |
Created container: prometheus-operator-admission-webhook | |
openshift-operators |
operator-lifecycle-manager |
cluster-observability-operator.v1.3.1 |
InstallWaiting |
installing: waiting for deployment perses-operator to become ready: deployment "perses-operator" not available: Deployment does not have minimum availability. | |
| (x3) | metallb-system |
operator-lifecycle-manager |
metallb-operator.v4.18.0-202601302238 |
InstallWaiting |
installing: waiting for deployment metallb-operator-controller-manager to become ready: deployment "metallb-operator-controller-manager" not available: Deployment does not have minimum availability. |
openshift-operators |
operator-lifecycle-manager |
cluster-observability-operator.v1.3.1 |
InstallWaiting |
installing: waiting for deployment perses-operator to become ready: deployment "perses-operator" not available: Deployment does not have minimum availability. | |
| (x3) | metallb-system |
operator-lifecycle-manager |
metallb-operator.v4.18.0-202601302238 |
InstallWaiting |
installing: waiting for deployment metallb-operator-controller-manager to become ready: deployment "metallb-operator-controller-manager" not available: Deployment does not have minimum availability. |
openshift-operators |
operator-lifecycle-manager |
cluster-observability-operator.v1.3.1 |
InstallSucceeded |
install strategy completed with no errors | |
openshift-operators |
operator-lifecycle-manager |
cluster-observability-operator.v1.3.1 |
InstallSucceeded |
install strategy completed with no errors | |
kube-system |
cert-manager-leader-election |
cert-manager-controller |
LeaderElection |
cert-manager-545d4d4674-xrzb8-external-cert-manager-controller became leader | |
metallb-system |
operator-lifecycle-manager |
metallb-operator.v4.18.0-202601302238 |
InstallSucceeded |
install strategy completed with no errors | |
metallb-system |
operator-lifecycle-manager |
metallb-operator.v4.18.0-202601302238 |
InstallSucceeded |
install strategy completed with no errors | |
metallb-system |
daemonset-controller |
speaker |
SuccessfulCreate |
Created pod: speaker-mj82t | |
metallb-system |
replicaset-controller |
frr-k8s-webhook-server-78b44bf5bb |
SuccessfulCreate |
Created pod: frr-k8s-webhook-server-78b44bf5bb-x52ls | |
metallb-system |
replicaset-controller |
controller-69bbfbf88f |
SuccessfulCreate |
Created pod: controller-69bbfbf88f-8w79x | |
metallb-system |
kubelet |
frr-k8s-t5g7s |
FailedMount |
MountVolume.SetUp failed for volume "metrics-certs" : secret "frr-k8s-certs-secret" not found | |
metallb-system |
daemonset-controller |
speaker |
SuccessfulCreate |
Created pod: speaker-mj82t | |
metallb-system |
replicaset-controller |
controller-69bbfbf88f |
SuccessfulCreate |
Created pod: controller-69bbfbf88f-8w79x | |
metallb-system |
deployment-controller |
frr-k8s-webhook-server |
ScalingReplicaSet |
Scaled up replica set frr-k8s-webhook-server-78b44bf5bb to 1 | |
metallb-system |
kubelet |
frr-k8s-t5g7s |
FailedMount |
MountVolume.SetUp failed for volume "metrics-certs" : secret "frr-k8s-certs-secret" not found | |
metallb-system |
daemonset-controller |
frr-k8s |
SuccessfulCreate |
Created pod: frr-k8s-t5g7s | |
metallb-system |
replicaset-controller |
frr-k8s-webhook-server-78b44bf5bb |
SuccessfulCreate |
Created pod: frr-k8s-webhook-server-78b44bf5bb-x52ls | |
metallb-system |
deployment-controller |
controller |
ScalingReplicaSet |
Scaled up replica set controller-69bbfbf88f to 1 | |
metallb-system |
deployment-controller |
controller |
ScalingReplicaSet |
Scaled up replica set controller-69bbfbf88f to 1 | |
default |
garbage-collector-controller |
frr-k8s-validating-webhook-configuration |
OwnerRefInvalidNamespace |
ownerRef [metallb.io/v1beta1/MetalLB, namespace: , name: metallb, uid: 6b51b0bd-bb26-48a9-bf4d-7b6a29dd6910] does not exist in namespace "" | |
metallb-system |
daemonset-controller |
frr-k8s |
SuccessfulCreate |
Created pod: frr-k8s-t5g7s | |
metallb-system |
deployment-controller |
frr-k8s-webhook-server |
ScalingReplicaSet |
Scaled up replica set frr-k8s-webhook-server-78b44bf5bb to 1 | |
metallb-system |
kubelet |
frr-k8s-webhook-server-78b44bf5bb-x52ls |
Pulling |
Pulling image "registry.redhat.io/openshift4/frr-rhel9@sha256:49bc23355dd52c33ffbef1ade12bdf026c3975fe17bd019cd0586ce5269f4d9c" | |
metallb-system |
multus |
frr-k8s-webhook-server-78b44bf5bb-x52ls |
AddedInterface |
Add eth0 [10.128.0.138/23] from ovn-kubernetes | |
metallb-system |
kubelet |
controller-69bbfbf88f-8w79x |
FailedMount |
MountVolume.SetUp failed for volume "metrics-certs" : secret "controller-certs-secret" not found | |
metallb-system |
kubelet |
frr-k8s-t5g7s |
Pulling |
Pulling image "registry.redhat.io/openshift4/frr-rhel9@sha256:49bc23355dd52c33ffbef1ade12bdf026c3975fe17bd019cd0586ce5269f4d9c" | |
metallb-system |
kubelet |
frr-k8s-t5g7s |
Pulling |
Pulling image "registry.redhat.io/openshift4/frr-rhel9@sha256:49bc23355dd52c33ffbef1ade12bdf026c3975fe17bd019cd0586ce5269f4d9c" | |
metallb-system |
multus |
frr-k8s-webhook-server-78b44bf5bb-x52ls |
AddedInterface |
Add eth0 [10.128.0.138/23] from ovn-kubernetes | |
metallb-system |
kubelet |
controller-69bbfbf88f-8w79x |
FailedMount |
MountVolume.SetUp failed for volume "metrics-certs" : secret "controller-certs-secret" not found | |
metallb-system |
kubelet |
frr-k8s-webhook-server-78b44bf5bb-x52ls |
Pulling |
Pulling image "registry.redhat.io/openshift4/frr-rhel9@sha256:49bc23355dd52c33ffbef1ade12bdf026c3975fe17bd019cd0586ce5269f4d9c" | |
openshift-nmstate |
deployment-controller |
nmstate-metrics |
ScalingReplicaSet |
Scaled up replica set nmstate-metrics-58c85c668d to 1 | |
metallb-system |
kubelet |
controller-69bbfbf88f-8w79x |
Started |
Started container controller | |
metallb-system |
kubelet |
controller-69bbfbf88f-8w79x |
Pulling |
Pulling image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:d887865ef3f02e69de8e9a95ec6504a29fcd3a32bef934d73b8f29684dbb9b95" | |
| (x3) | metallb-system |
kubelet |
speaker-mj82t |
FailedMount |
MountVolume.SetUp failed for volume "memberlist" : secret "metallb-memberlist" not found |
metallb-system |
kubelet |
controller-69bbfbf88f-8w79x |
Pulling |
Pulling image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:d887865ef3f02e69de8e9a95ec6504a29fcd3a32bef934d73b8f29684dbb9b95" | |
metallb-system |
kubelet |
controller-69bbfbf88f-8w79x |
Started |
Started container controller | |
metallb-system |
kubelet |
controller-69bbfbf88f-8w79x |
Pulled |
Container image "registry.redhat.io/openshift4/metallb-rhel9@sha256:5734cf213934e92a65bb43f1e280bfdfb5b614a7ef55fedf26acdbc5c020092e" already present on machine | |
| (x3) | metallb-system |
kubelet |
speaker-mj82t |
FailedMount |
MountVolume.SetUp failed for volume "memberlist" : secret "metallb-memberlist" not found |
metallb-system |
kubelet |
controller-69bbfbf88f-8w79x |
Created |
Created container: controller | |
metallb-system |
multus |
controller-69bbfbf88f-8w79x |
AddedInterface |
Add eth0 [10.128.0.139/23] from ovn-kubernetes | |
metallb-system |
kubelet |
controller-69bbfbf88f-8w79x |
Pulled |
Container image "registry.redhat.io/openshift4/metallb-rhel9@sha256:5734cf213934e92a65bb43f1e280bfdfb5b614a7ef55fedf26acdbc5c020092e" already present on machine | |
metallb-system |
multus |
controller-69bbfbf88f-8w79x |
AddedInterface |
Add eth0 [10.128.0.139/23] from ovn-kubernetes | |
openshift-nmstate |
deployment-controller |
nmstate-metrics |
ScalingReplicaSet |
Scaled up replica set nmstate-metrics-58c85c668d to 1 | |
metallb-system |
kubelet |
controller-69bbfbf88f-8w79x |
Created |
Created container: controller | |
openshift-nmstate |
kubelet |
nmstate-metrics-58c85c668d-xtbrb |
Pulling |
Pulling image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:e7eb59ba358fc8a9549ac56073554c82150629f8049c34aec7f6d10fbb48dcbf" | |
| (x13) | openshift-console-operator |
console-operator-console-operator-consoleoperator |
console-operator |
DeploymentUpdated |
Updated Deployment.apps/console -n openshift-console because it changed |
openshift-nmstate |
kubelet |
nmstate-metrics-58c85c668d-xtbrb |
Pulling |
Pulling image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:e7eb59ba358fc8a9549ac56073554c82150629f8049c34aec7f6d10fbb48dcbf" | |
openshift-nmstate |
daemonset-controller |
nmstate-handler |
SuccessfulCreate |
Created pod: nmstate-handler-44nvt | |
openshift-nmstate |
multus |
nmstate-metrics-58c85c668d-xtbrb |
AddedInterface |
Add eth0 [10.128.0.140/23] from ovn-kubernetes | |
openshift-nmstate |
multus |
nmstate-metrics-58c85c668d-xtbrb |
AddedInterface |
Add eth0 [10.128.0.140/23] from ovn-kubernetes | |
openshift-console |
replicaset-controller |
console-5995fb765 |
SuccessfulCreate |
Created pod: console-5995fb765-xddwx | |
openshift-nmstate |
replicaset-controller |
nmstate-metrics-58c85c668d |
SuccessfulCreate |
Created pod: nmstate-metrics-58c85c668d-xtbrb | |
openshift-nmstate |
replicaset-controller |
nmstate-metrics-58c85c668d |
SuccessfulCreate |
Created pod: nmstate-metrics-58c85c668d-xtbrb | |
openshift-nmstate |
kubelet |
nmstate-handler-44nvt |
Pulling |
Pulling image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:e7eb59ba358fc8a9549ac56073554c82150629f8049c34aec7f6d10fbb48dcbf" | |
openshift-nmstate |
deployment-controller |
nmstate-console-plugin |
ScalingReplicaSet |
Scaled up replica set nmstate-console-plugin-5c78fc5d65 to 1 | |
openshift-nmstate |
replicaset-controller |
nmstate-console-plugin-5c78fc5d65 |
SuccessfulCreate |
Created pod: nmstate-console-plugin-5c78fc5d65-c9ckb | |
openshift-nmstate |
kubelet |
nmstate-webhook-866bcb46dc-4q7kf |
FailedMount |
MountVolume.SetUp failed for volume "tls-key-pair" : secret "openshift-nmstate-webhook" not found | |
| | openshift-nmstate | kubelet | nmstate-console-plugin-5c78fc5d65-c9ckb | Pulling | Pulling image "registry.redhat.io/openshift4/nmstate-console-plugin-rhel9@sha256:f7a7a69ee046c4a564903470bf770a575b8f2872fb31c2e2023dcc65e975e078" |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: status.relatedObjects changed from [{"console.openshift.io" "consoleplugins" "" "monitoring-plugin"} {"console.openshift.io" "consoleplugins" "" "networking-console-plugin"} {"operator.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "infrastructures" "" "cluster"} {"config.openshift.io" "proxies" "" "cluster"} {"config.openshift.io" "oauths" "" "cluster"} {"oauth.openshift.io" "oauthclients" "" "console"} {"" "namespaces" "" "openshift-console-operator"} {"" "namespaces" "" "openshift-console"} {"" "configmaps" "openshift-config-managed" "console-public"}] to [{"console.openshift.io" "consoleplugins" "" "monitoring-plugin"} {"console.openshift.io" "consoleplugins" "" "networking-console-plugin"} {"console.openshift.io" "consoleplugins" "" "nmstate-console-plugin"} {"operator.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "infrastructures" "" "cluster"} {"config.openshift.io" "proxies" "" "cluster"} {"config.openshift.io" "oauths" "" "cluster"} {"oauth.openshift.io" "oauthclients" "" "console"} {"" "namespaces" "" "openshift-console-operator"} {"" "namespaces" "" "openshift-console"} {"" "configmaps" "openshift-config-managed" "console-public"}] |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Progressing changed from False to True ("SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected") |
| | openshift-nmstate | multus | nmstate-console-plugin-5c78fc5d65-c9ckb | AddedInterface | Add eth0 [10.128.0.142/23] from ovn-kubernetes |
| | openshift-nmstate | deployment-controller | nmstate-webhook | ScalingReplicaSet | Scaled up replica set nmstate-webhook-866bcb46dc to 1 |
| | openshift-nmstate | replicaset-controller | nmstate-webhook-866bcb46dc | SuccessfulCreate | Created pod: nmstate-webhook-866bcb46dc-4q7kf |
| | openshift-nmstate | daemonset-controller | nmstate-handler | SuccessfulCreate | Created pod: nmstate-handler-44nvt |
| | openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled up replica set console-5995fb765 to 1 |
| (x3) | openshift-console-operator | console-operator-console-operator-consoleoperator | console-operator | ConfigMapUpdated | Updated ConfigMap/console-config -n openshift-console: cause by changes in data.console-config.yaml |
| | openshift-nmstate | kubelet | nmstate-webhook-866bcb46dc-4q7kf | FailedMount | MountVolume.SetUp failed for volume "tls-key-pair" : secret "openshift-nmstate-webhook" not found |
| | openshift-nmstate | kubelet | nmstate-handler-44nvt | Pulling | Pulling image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:e7eb59ba358fc8a9549ac56073554c82150629f8049c34aec7f6d10fbb48dcbf" |
| | openshift-nmstate | replicaset-controller | nmstate-console-plugin-5c78fc5d65 | SuccessfulCreate | Created pod: nmstate-console-plugin-5c78fc5d65-c9ckb |
| | openshift-nmstate | deployment-controller | nmstate-console-plugin | ScalingReplicaSet | Scaled up replica set nmstate-console-plugin-5c78fc5d65 to 1 |
| | openshift-nmstate | kubelet | nmstate-webhook-866bcb46dc-4q7kf | Pulling | Pulling image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:e7eb59ba358fc8a9549ac56073554c82150629f8049c34aec7f6d10fbb48dcbf" |
| | openshift-console | multus | console-5995fb765-xddwx | AddedInterface | Add eth0 [10.128.0.143/23] from ovn-kubernetes |
| | openshift-nmstate | multus | nmstate-webhook-866bcb46dc-4q7kf | AddedInterface | Add eth0 [10.128.0.141/23] from ovn-kubernetes |
| | openshift-console | kubelet | console-5995fb765-xddwx | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8164cc9e16e8be9ea18be73c9df5041af326ed6b3059faff08f76e568cf4dc2" already present on machine |
| | openshift-console | kubelet | console-5995fb765-xddwx | Created | Created container: console |
| | openshift-console | kubelet | console-5995fb765-xddwx | Started | Started container console |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Progressing message changed from "SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected" to "SyncLoopRefreshProgressing: working toward version 4.18.32, 1 replicas available" |
| | metallb-system | kubelet | speaker-mj82t | Pulled | Container image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:d887865ef3f02e69de8e9a95ec6504a29fcd3a32bef934d73b8f29684dbb9b95" already present on machine |
| | metallb-system | kubelet | speaker-mj82t | Pulled | Container image "registry.redhat.io/openshift4/metallb-rhel9@sha256:5734cf213934e92a65bb43f1e280bfdfb5b614a7ef55fedf26acdbc5c020092e" already present on machine |
| | metallb-system | kubelet | controller-69bbfbf88f-8w79x | Pulled | Successfully pulled image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:d887865ef3f02e69de8e9a95ec6504a29fcd3a32bef934d73b8f29684dbb9b95" in 2.458s (2.458s including waiting). Image size: 464998810 bytes. |
| | metallb-system | kubelet | speaker-mj82t | Started | Started container kube-rbac-proxy |
| | metallb-system | kubelet | controller-69bbfbf88f-8w79x | Started | Started container kube-rbac-proxy |
| | metallb-system | kubelet | controller-69bbfbf88f-8w79x | Created | Created container: kube-rbac-proxy |
| | metallb-system | kubelet | speaker-mj82t | Created | Created container: kube-rbac-proxy |
| | metallb-system | kubelet | speaker-mj82t | Started | Started container speaker |
| | metallb-system | kubelet | speaker-mj82t | Created | Created container: speaker |
| | openshift-nmstate | kubelet | nmstate-metrics-58c85c668d-xtbrb | Pulled | Successfully pulled image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:e7eb59ba358fc8a9549ac56073554c82150629f8049c34aec7f6d10fbb48dcbf" in 5.059s (5.059s including waiting). Image size: 498436272 bytes. |
| | metallb-system | kubelet | frr-k8s-t5g7s | Pulled | Successfully pulled image "registry.redhat.io/openshift4/frr-rhel9@sha256:49bc23355dd52c33ffbef1ade12bdf026c3975fe17bd019cd0586ce5269f4d9c" in 7.044s (7.044s including waiting). Image size: 662037039 bytes. |
| | openshift-nmstate | kubelet | nmstate-console-plugin-5c78fc5d65-c9ckb | Pulled | Successfully pulled image "registry.redhat.io/openshift4/nmstate-console-plugin-rhel9@sha256:f7a7a69ee046c4a564903470bf770a575b8f2872fb31c2e2023dcc65e975e078" in 4.934s (4.934s including waiting). Image size: 453642085 bytes. |
| | openshift-nmstate | kubelet | nmstate-webhook-866bcb46dc-4q7kf | Pulled | Successfully pulled image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:e7eb59ba358fc8a9549ac56073554c82150629f8049c34aec7f6d10fbb48dcbf" in 4.438s (4.438s including waiting). Image size: 498436272 bytes. |
| | openshift-nmstate | kubelet | nmstate-handler-44nvt | Pulled | Successfully pulled image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:e7eb59ba358fc8a9549ac56073554c82150629f8049c34aec7f6d10fbb48dcbf" in 5.419s (5.419s including waiting). Image size: 498436272 bytes. |
| | openshift-nmstate | kubelet | nmstate-handler-44nvt | Started | Started container nmstate-handler |
| | openshift-nmstate | kubelet | nmstate-webhook-866bcb46dc-4q7kf | Created | Created container: nmstate-webhook |
| | metallb-system | kubelet | frr-k8s-webhook-server-78b44bf5bb-x52ls | Created | Created container: frr-k8s-webhook-server |
| | openshift-nmstate | kubelet | nmstate-handler-44nvt | Created | Created container: nmstate-handler |
| | metallb-system | kubelet | frr-k8s-webhook-server-78b44bf5bb-x52ls | Pulled | Successfully pulled image "registry.redhat.io/openshift4/frr-rhel9@sha256:49bc23355dd52c33ffbef1ade12bdf026c3975fe17bd019cd0586ce5269f4d9c" in 7.401s (7.401s including waiting). Image size: 662037039 bytes. |
| | openshift-nmstate | kubelet | nmstate-metrics-58c85c668d-xtbrb | Created | Created container: nmstate-metrics |
| | openshift-nmstate | kubelet | nmstate-metrics-58c85c668d-xtbrb | Started | Started container nmstate-metrics |
| | openshift-nmstate | kubelet | nmstate-metrics-58c85c668d-xtbrb | Pulled | Container image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:d887865ef3f02e69de8e9a95ec6504a29fcd3a32bef934d73b8f29684dbb9b95" already present on machine |
| | openshift-nmstate | kubelet | nmstate-metrics-58c85c668d-xtbrb | Created | Created container: kube-rbac-proxy |
| | openshift-nmstate | kubelet | nmstate-metrics-58c85c668d-xtbrb | Started | Started container kube-rbac-proxy |
| | metallb-system | kubelet | frr-k8s-webhook-server-78b44bf5bb-x52ls | Started | Started container frr-k8s-webhook-server |
| | metallb-system | kubelet | frr-k8s-t5g7s | Started | Started container cp-frr-files |
| | openshift-nmstate | kubelet | nmstate-console-plugin-5c78fc5d65-c9ckb | Started | Started container nmstate-console-plugin |
| | openshift-nmstate | kubelet | nmstate-console-plugin-5c78fc5d65-c9ckb | Created | Created container: nmstate-console-plugin |
| | metallb-system | kubelet | frr-k8s-t5g7s | Pulled | Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:49bc23355dd52c33ffbef1ade12bdf026c3975fe17bd019cd0586ce5269f4d9c" already present on machine |
| | metallb-system | kubelet | frr-k8s-t5g7s | Created | Created container: cp-frr-files |
| | metallb-system | kubelet | frr-k8s-t5g7s | Created | Created container: cp-reloader |
| | openshift-nmstate | kubelet | nmstate-webhook-866bcb46dc-4q7kf | Started | Started container nmstate-webhook |
| | metallb-system | kubelet | frr-k8s-t5g7s | Started | Started container cp-reloader |
| | metallb-system | kubelet | frr-k8s-t5g7s | Created | Created container: cp-metrics |
| | metallb-system | kubelet | frr-k8s-t5g7s | Pulled | Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:49bc23355dd52c33ffbef1ade12bdf026c3975fe17bd019cd0586ce5269f4d9c" already present on machine |
| | metallb-system | kubelet | frr-k8s-t5g7s | Started | Started container cp-metrics |
| | metallb-system | kubelet | frr-k8s-t5g7s | Pulled | Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:49bc23355dd52c33ffbef1ade12bdf026c3975fe17bd019cd0586ce5269f4d9c" already present on machine |
| | metallb-system | kubelet | frr-k8s-t5g7s | Pulled | Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:49bc23355dd52c33ffbef1ade12bdf026c3975fe17bd019cd0586ce5269f4d9c" already present on machine |
| | metallb-system | kubelet | frr-k8s-t5g7s | Created | Created container: controller |
| | metallb-system | kubelet | frr-k8s-t5g7s | Created | Created container: frr |
| | metallb-system | kubelet | frr-k8s-t5g7s | Started | Started container frr |
| | metallb-system | kubelet | frr-k8s-t5g7s | Started | Started container controller |
| | metallb-system | kubelet | frr-k8s-t5g7s | Pulled | Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:49bc23355dd52c33ffbef1ade12bdf026c3975fe17bd019cd0586ce5269f4d9c" already present on machine |
| | metallb-system | kubelet | frr-k8s-t5g7s | Started | Started container reloader |
| | metallb-system | kubelet | frr-k8s-t5g7s | Created | Created container: kube-rbac-proxy |
| | metallb-system | kubelet | frr-k8s-t5g7s | Created | Created container: reloader |
| | metallb-system | kubelet | frr-k8s-t5g7s | Pulled | Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:49bc23355dd52c33ffbef1ade12bdf026c3975fe17bd019cd0586ce5269f4d9c" already present on machine |
| | metallb-system | kubelet | frr-k8s-t5g7s | Pulled | Container image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:d887865ef3f02e69de8e9a95ec6504a29fcd3a32bef934d73b8f29684dbb9b95" already present on machine |
| | metallb-system | kubelet | frr-k8s-t5g7s | Started | Started container frr-metrics |
| | metallb-system | kubelet | frr-k8s-t5g7s | Created | Created container: frr-metrics |
| | openshift-console | kubelet | console-6f45cc898f-z9tb2 | Killing | Stopping container console |
| | openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled down replica set console-6f45cc898f to 0 from 1 |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Progressing changed from True to False ("All is well") |
| | openshift-console | replicaset-controller | console-6f45cc898f | SuccessfulDelete | Deleted pod: console-6f45cc898f-z9tb2 |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Progressing message changed from "SyncLoopRefreshProgressing: working toward version 4.18.32, 1 replicas available" to "SyncLoopRefreshProgressing: working toward version 4.18.32, 2 replicas available" |
| | openshift-storage | daemonset-controller | vg-manager | SuccessfulCreate | Created pod: vg-manager-5rvk7 |
| | openshift-storage | multus | vg-manager-5rvk7 | AddedInterface | Add eth0 [10.128.0.144/23] from ovn-kubernetes |
| (x12) | openshift-storage | LVMClusterReconciler | lvmcluster | ResourceReconciliationIncomplete | LVMCluster's resources are not yet fully synchronized: csi node master-0 does not have driver topolvm.io |
| (x2) | openshift-storage | kubelet | vg-manager-5rvk7 | Pulled | Container image "registry.redhat.io/lvms4/lvms-rhel9-operator@sha256:be49109ec5be53a783b2de9bc6529db99930a69021577c05cfa0bf1312e06d69" already present on machine |
| (x2) | openshift-storage | kubelet | vg-manager-5rvk7 | Created | Created container: vg-manager |
| (x2) | openshift-storage | kubelet | vg-manager-5rvk7 | Started | Started container vg-manager |
| | openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openstack namespace |
| | openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openstack-operators namespace |
| | openstack-operators | kubelet | openstack-operator-index-gzfb5 | Pulling | Pulling image "quay.io/openstack-k8s-operators/openstack-operator-index:latest" |
| | openstack-operators | multus | openstack-operator-index-gzfb5 | AddedInterface | Add eth0 [10.128.0.145/23] from ovn-kubernetes |
| | openstack-operators | kubelet | openstack-operator-index-gzfb5 | Created | Created container: registry-server |
| | openstack-operators | kubelet | openstack-operator-index-gzfb5 | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/openstack-operator-index:latest" in 1.39s (1.39s including waiting). Image size: 918506146 bytes. |
| | openstack-operators | kubelet | openstack-operator-index-gzfb5 | Started | Started container registry-server |
| (x9) | default | operator-lifecycle-manager | openstack-operators | ResolutionFailed | error using catalogsource openstack-operators/openstack-operator-index: no registry client established for catalogsource openstack-operators/openstack-operator-index |
| | openstack-operators | kubelet | openstack-operator-index-gzfb5 | Killing | Stopping container registry-server |
| | openstack-operators | multus | openstack-operator-index-chx5x | AddedInterface | Add eth0 [10.128.0.146/23] from ovn-kubernetes |
| | openstack-operators | kubelet | openstack-operator-index-chx5x | Pulling | Pulling image "quay.io/openstack-k8s-operators/openstack-operator-index:latest" |
| | openstack-operators | kubelet | openstack-operator-index-chx5x | Created | Created container: registry-server |
| | openstack-operators | kubelet | openstack-operator-index-chx5x | Started | Started container registry-server |
| | openstack-operators | kubelet | openstack-operator-index-chx5x | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/openstack-operator-index:latest" in 405ms (405ms including waiting). Image size: 918506146 bytes. |
| | default | operator-lifecycle-manager | openstack-operators | ResolutionFailed | error using catalogsource openstack-operators/openstack-operator-index: failed to list bundles: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 172.30.77.94:50051: connect: connection refused" |
| | openstack-operators | job-controller | 4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c2134432 | SuccessfulCreate | Created pod: 4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21lwt5p |
| | openstack-operators | multus | 4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21lwt5p | AddedInterface | Add eth0 [10.128.0.147/23] from ovn-kubernetes |
| | openstack-operators | kubelet | 4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21lwt5p | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" already present on machine |
| | openstack-operators | kubelet | 4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21lwt5p | Created | Created container: util |
| | openstack-operators | kubelet | 4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21lwt5p | Started | Started container util |
| | openstack-operators | kubelet | 4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21lwt5p | Pulling | Pulling image "quay.io/openstack-k8s-operators/openstack-operator-bundle:aa980a9183a9d6b486341fafb14196305ef737d7" |
| | openstack-operators | kubelet | 4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21lwt5p | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/openstack-operator-bundle:aa980a9183a9d6b486341fafb14196305ef737d7" in 706ms (706ms including waiting). Image size: 115772 bytes. |
| | openstack-operators | kubelet | 4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21lwt5p | Created | Created container: pull |
| | openstack-operators | kubelet | 4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21lwt5p | Started | Started container extract |
| | openstack-operators | kubelet | 4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21lwt5p | Created | Created container: extract |
| | openstack-operators | kubelet | 4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21lwt5p | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aca57c8abaa83b2d1dce95fa7fe5b9416be70e100957ce48f212e2ec404387bc" already present on machine |
| | openstack-operators | kubelet | 4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21lwt5p | Started | Started container pull |
| | openstack-operators | job-controller | 4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c2134432 | Completed | Job completed |
| | openstack-operators | operator-lifecycle-manager | openstack-operator.v0.6.0 | RequirementsNotMet | one or more requirements couldn't be found |
| | openstack-operators | operator-lifecycle-manager | openstack-operator.v0.6.0 | AllRequirementsMet | all requirements found, attempting install |
| | openstack-operators | operator-lifecycle-manager | openstack-operator.v0.6.0 | InstallWaiting | installing: waiting for deployment openstack-operator-controller-init to become ready: waiting for spec update of deployment "openstack-operator-controller-init" to be observed... |
| | openstack-operators | operator-lifecycle-manager | openstack-operator.v0.6.0 | InstallSucceeded | waiting for install components to report healthy |
| | openstack-operators | replicaset-controller | openstack-operator-controller-init-7f8db498b4 | SuccessfulCreate | Created pod: openstack-operator-controller-init-7f8db498b4-66blt |
| | openstack-operators | operator-lifecycle-manager | openstack-operator.v0.6.0 | RequirementsUnknown | requirements not yet checked |
| | openstack-operators | deployment-controller | openstack-operator-controller-init | ScalingReplicaSet | Scaled up replica set openstack-operator-controller-init-7f8db498b4 to 1 |
| (x2) | openstack-operators | operator-lifecycle-manager | openstack-operator.v0.6.0 | InstallWaiting | installing: waiting for deployment openstack-operator-controller-init to become ready: deployment "openstack-operator-controller-init" not available: Deployment does not have minimum availability. |
| | openstack-operators | multus | openstack-operator-controller-init-7f8db498b4-66blt | AddedInterface | Add eth0 [10.128.0.148/23] from ovn-kubernetes |
| | openstack-operators | kubelet | openstack-operator-controller-init-7f8db498b4-66blt | Pulling | Pulling image "quay.io/openstack-k8s-operators/openstack-operator@sha256:afef4af1a95a151f4e9bbb0096272d00e3e985bb25b23b4fb7f8a26ee62526a7" |
openstack-operators |
kubelet |
openstack-operator-controller-init-7f8db498b4-66blt |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/openstack-operator@sha256:afef4af1a95a151f4e9bbb0096272d00e3e985bb25b23b4fb7f8a26ee62526a7" | |
| (x2) | openstack-operators |
operator-lifecycle-manager |
openstack-operator.v0.6.0 |
InstallWaiting |
installing: waiting for deployment openstack-operator-controller-init to become ready: deployment "openstack-operator-controller-init" not available: Deployment does not have minimum availability. |
openstack-operators |
kubelet |
openstack-operator-controller-init-7f8db498b4-66blt |
Created |
Created container: operator | |
openstack-operators |
kubelet |
openstack-operator-controller-init-7f8db498b4-66blt |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/openstack-operator@sha256:afef4af1a95a151f4e9bbb0096272d00e3e985bb25b23b4fb7f8a26ee62526a7" in 4.208s (4.208s including waiting). Image size: 293229897 bytes. | |
openstack-operators |
kubelet |
openstack-operator-controller-init-7f8db498b4-66blt |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/openstack-operator@sha256:afef4af1a95a151f4e9bbb0096272d00e3e985bb25b23b4fb7f8a26ee62526a7" in 4.208s (4.208s including waiting). Image size: 293229897 bytes. | |
openstack-operators |
kubelet |
openstack-operator-controller-init-7f8db498b4-66blt |
Started |
Started container operator | |
openstack-operators |
kubelet |
openstack-operator-controller-init-7f8db498b4-66blt |
Started |
Started container operator | |
openstack-operators |
kubelet |
openstack-operator-controller-init-7f8db498b4-66blt |
Created |
Created container: operator | |
openstack-operators |
openstack-operator-controller-init-7f8db498b4-66blt_cf07bdef-1d2e-4ecb-a8fb-ca52b856edce |
20ca801f.openstack.org |
LeaderElection |
openstack-operator-controller-init-7f8db498b4-66blt_cf07bdef-1d2e-4ecb-a8fb-ca52b856edce became leader | |
openstack-operators |
openstack-operator-controller-init-7f8db498b4-66blt_cf07bdef-1d2e-4ecb-a8fb-ca52b856edce |
20ca801f.openstack.org |
LeaderElection |
openstack-operator-controller-init-7f8db498b4-66blt_cf07bdef-1d2e-4ecb-a8fb-ca52b856edce became leader | |
openstack-operators |
operator-lifecycle-manager |
openstack-operator.v0.6.0 |
InstallSucceeded |
install strategy completed with no errors | |
openstack-operators |
operator-lifecycle-manager |
openstack-operator.v0.6.0 |
InstallSucceeded |
install strategy completed with no errors | |
openstack-operators |
cert-manager-certificates-trigger |
barbican-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-trigger |
barbican-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-key-manager |
barbican-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "barbican-operator-metrics-certs-vdgcn" | |
openstack-operators |
cert-manager-certificates-key-manager |
barbican-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "barbican-operator-metrics-certs-vdgcn" | |
openstack-operators |
cert-manager-certificaterequests-approver |
designate-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificates-key-manager |
cinder-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "cinder-operator-metrics-certs-2nq2s" | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
barbican-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
barbican-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
barbican-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
barbican-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
barbican-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-approver |
barbican-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
designate-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
barbican-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
barbican-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
barbican-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
barbican-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
barbican-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
designate-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-approver |
barbican-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificates-request-manager |
cinder-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "cinder-operator-metrics-certs-1" | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
barbican-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
barbican-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
designate-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
designate-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
barbican-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
barbican-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificates-key-manager |
cinder-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "cinder-operator-metrics-certs-2nq2s" | |
openstack-operators |
cert-manager-certificates-trigger |
cinder-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-request-manager |
barbican-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "barbican-operator-metrics-certs-1" | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
cinder-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificates-issuing |
barbican-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
cinder-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
cinder-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
cinder-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
cinder-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
cinder-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-approver |
cinder-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
cinder-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
designate-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
cinder-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificaterequests-approver |
cinder-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
cinder-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
cinder-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
cinder-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-trigger |
cinder-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
designate-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
cinder-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
cinder-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-request-manager |
cinder-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "cinder-operator-metrics-certs-1" | |
openstack-operators |
cert-manager-certificates-request-manager |
barbican-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "barbican-operator-metrics-certs-1" | |
openstack-operators |
cert-manager-certificates-trigger |
glance-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
designate-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificates-trigger |
designate-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-key-manager |
designate-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "designate-operator-metrics-certs-whrhg" | |
openstack-operators |
cert-manager-certificates-request-manager |
designate-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "designate-operator-metrics-certs-1" | |
openstack-operators |
cert-manager-certificates-issuing |
barbican-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
designate-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
designate-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
designate-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
designate-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
designate-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-approver |
designate-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
designate-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
designate-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificates-trigger |
designate-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-key-manager |
designate-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "designate-operator-metrics-certs-whrhg" | |
openstack-operators |
cert-manager-certificates-request-manager |
designate-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "designate-operator-metrics-certs-1" | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
cinder-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-trigger |
glance-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-key-manager |
glance-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "glance-operator-metrics-certs-zc7kl" | |
openstack-operators |
cert-manager-certificates-trigger |
ironic-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-key-manager |
glance-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "glance-operator-metrics-certs-zc7kl" | |
openstack-operators |
cert-manager-certificates-trigger |
heat-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-trigger |
heat-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-trigger |
ironic-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-trigger |
neutron-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-issuing |
cinder-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificates-trigger |
horizon-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-trigger |
neutron-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-trigger |
horizon-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-key-manager |
ironic-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "ironic-operator-metrics-certs-ffvz5" | |
openstack-operators |
cert-manager-certificates-issuing |
cinder-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificates-trigger |
infra-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-key-manager |
ironic-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "ironic-operator-metrics-certs-ffvz5" | |
openstack-operators |
cert-manager-certificates-trigger |
infra-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-issuing |
designate-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificates-issuing |
designate-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
glance-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
glance-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
glance-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
glance-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
glance-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
glance-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
glance-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
glance-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-key-manager |
heat-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "heat-operator-metrics-certs-kk2jl" | |
openstack-operators |
cert-manager-certificates-trigger |
manila-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-trigger |
manila-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-trigger |
nova-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-trigger |
keystone-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-trigger |
keystone-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-request-manager |
glance-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "glance-operator-metrics-certs-1" | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
glance-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-request-manager |
glance-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "glance-operator-metrics-certs-1" | |
openstack-operators |
cert-manager-certificates-trigger |
nova-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-key-manager |
heat-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "heat-operator-metrics-certs-kk2jl" | |
openstack-operators |
cert-manager-certificates-key-manager |
horizon-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "horizon-operator-metrics-certs-tvv9j" | |
openstack-operators |
cert-manager-certificates-key-manager |
horizon-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "horizon-operator-metrics-certs-tvv9j" | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
glance-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-trigger |
mariadb-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
glance-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
glance-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificates-trigger |
openstack-baremetal-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificaterequests-approver |
glance-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificates-trigger |
ovn-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-trigger |
ovn-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-trigger |
openstack-baremetal-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
glance-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
glance-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificates-trigger |
mariadb-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificaterequests-approver |
glance-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
heat-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
horizon-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
heat-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-key-manager |
neutron-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "neutron-operator-metrics-certs-vtldd" | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
heat-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
heat-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-request-manager |
heat-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "heat-operator-metrics-certs-1" | |
openstack-operators |
cert-manager-certificates-key-manager |
openstack-baremetal-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "openstack-baremetal-operator-metrics-certs-jg6h4" | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
heat-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
heat-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-key-manager |
neutron-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "neutron-operator-metrics-certs-vtldd" | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
heat-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
heat-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
horizon-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
horizon-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
horizon-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
horizon-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
heat-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-key-manager |
infra-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "infra-operator-metrics-certs-xbhh9" | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
heat-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
| | openstack-operators | cert-manager-certificates-trigger | octavia-operator-metrics-certs | Issuing | Issuing certificate as Secret does not exist |
| | openstack-operators | cert-manager-certificates-request-manager | horizon-operator-metrics-certs | Requested | Created new CertificateRequest resource "horizon-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | horizon-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-key-manager | infra-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "infra-operator-metrics-certs-xbhh9" |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | horizon-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-trigger | placement-operator-metrics-certs | Issuing | Issuing certificate as Secret does not exist |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | horizon-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | horizon-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | horizon-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-request-manager | heat-operator-metrics-certs | Requested | Created new CertificateRequest resource "heat-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificates-key-manager | openstack-baremetal-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "openstack-baremetal-operator-metrics-certs-jg6h4" |
| | openstack-operators | replicaset-controller | heat-operator-controller-manager-69f49c598c | SuccessfulCreate | Created pod: heat-operator-controller-manager-69f49c598c-ngkpp |
| | openstack-operators | deployment-controller | glance-operator-controller-manager | ScalingReplicaSet | Scaled up replica set glance-operator-controller-manager-77987464f4 to 1 |
| | openstack-operators | replicaset-controller | mariadb-operator-controller-manager-6994f66f48 | SuccessfulCreate | Created pod: mariadb-operator-controller-manager-6994f66f48-dgqgn |
| | openstack-operators | deployment-controller | mariadb-operator-controller-manager | ScalingReplicaSet | Scaled up replica set mariadb-operator-controller-manager-6994f66f48 to 1 |
| | openstack-operators | deployment-controller | placement-operator-controller-manager | ScalingReplicaSet | Scaled up replica set placement-operator-controller-manager-8497b45c89 to 1 |
| | openstack-operators | deployment-controller | openstack-baremetal-operator-controller-manager | ScalingReplicaSet | Scaled up replica set openstack-baremetal-operator-controller-manager-5f8cd6b89b to 1 |
| | openstack-operators | cert-manager-certificates-key-manager | ovn-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "ovn-operator-metrics-certs-8v4h2" |
| | openstack-operators | replicaset-controller | openstack-baremetal-operator-controller-manager-5f8cd6b89b | SuccessfulCreate | Created pod: openstack-baremetal-operator-controller-manager-5f8cd6b89br8pdn |
| | openstack-operators | deployment-controller | octavia-operator-controller-manager | ScalingReplicaSet | Scaled up replica set octavia-operator-controller-manager-69f8888797 to 1 |
| | openstack-operators | deployment-controller | keystone-operator-controller-manager | ScalingReplicaSet | Scaled up replica set keystone-operator-controller-manager-b4d948c87 to 1 |
| | openstack-operators | deployment-controller | ironic-operator-controller-manager | ScalingReplicaSet | Scaled up replica set ironic-operator-controller-manager-554564d7fc to 1 |
| | openstack-operators | replicaset-controller | ironic-operator-controller-manager-554564d7fc | SuccessfulCreate | Created pod: ironic-operator-controller-manager-554564d7fc-x78p9 |
| | openstack-operators | cert-manager-certificates-key-manager | keystone-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "keystone-operator-metrics-certs-qsgpc" |
| | openstack-operators | replicaset-controller | manila-operator-controller-manager-54f6768c69 | SuccessfulCreate | Created pod: manila-operator-controller-manager-54f6768c69-fnw4p |
| | openstack-operators | deployment-controller | manila-operator-controller-manager | ScalingReplicaSet | Scaled up replica set manila-operator-controller-manager-54f6768c69 to 1 |
| | openstack-operators | replicaset-controller | octavia-operator-controller-manager-69f8888797 | SuccessfulCreate | Created pod: octavia-operator-controller-manager-69f8888797-6sx67 |
| | openstack-operators | replicaset-controller | barbican-operator-controller-manager-868647ff47 | SuccessfulCreate | Created pod: barbican-operator-controller-manager-868647ff47-58dhd |
| | openstack-operators | deployment-controller | barbican-operator-controller-manager | ScalingReplicaSet | Scaled up replica set barbican-operator-controller-manager-868647ff47 to 1 |
| | openstack-operators | replicaset-controller | neutron-operator-controller-manager-64ddbf8bb | SuccessfulCreate | Created pod: neutron-operator-controller-manager-64ddbf8bb-5mtgr |
| | openstack-operators | deployment-controller | neutron-operator-controller-manager | ScalingReplicaSet | Scaled up replica set neutron-operator-controller-manager-64ddbf8bb to 1 |
| | openstack-operators | deployment-controller | nova-operator-controller-manager | ScalingReplicaSet | Scaled up replica set nova-operator-controller-manager-567668f5cf to 1 |
| | openstack-operators | deployment-controller | designate-operator-controller-manager | ScalingReplicaSet | Scaled up replica set designate-operator-controller-manager-6d8bf5c495 to 1 |
| | openstack-operators | replicaset-controller | nova-operator-controller-manager-567668f5cf | SuccessfulCreate | Created pod: nova-operator-controller-manager-567668f5cf-2td54 |
| | openstack-operators | deployment-controller | ovn-operator-controller-manager | ScalingReplicaSet | Scaled up replica set ovn-operator-controller-manager-d44cf6b75 to 1 |
| | openstack-operators | replicaset-controller | ovn-operator-controller-manager-d44cf6b75 | SuccessfulCreate | Created pod: ovn-operator-controller-manager-d44cf6b75-gwh4x |
| | openstack-operators | replicaset-controller | glance-operator-controller-manager-77987464f4 | SuccessfulCreate | Created pod: glance-operator-controller-manager-77987464f4-sqmnn |
| | openstack-operators | replicaset-controller | cinder-operator-controller-manager-5d946d989d | SuccessfulCreate | Created pod: cinder-operator-controller-manager-5d946d989d-6mnh8 |
| | openstack-operators | deployment-controller | cinder-operator-controller-manager | ScalingReplicaSet | Scaled up replica set cinder-operator-controller-manager-5d946d989d to 1 |
| | openstack-operators | deployment-controller | heat-operator-controller-manager | ScalingReplicaSet | Scaled up replica set heat-operator-controller-manager-69f49c598c to 1 |
| | openstack-operators | cert-manager-certificaterequests-approver | heat-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | deployment-controller | infra-operator-controller-manager | ScalingReplicaSet | Scaled up replica set infra-operator-controller-manager-5f879c76b6 to 1 |
| | openstack-operators | replicaset-controller | designate-operator-controller-manager-6d8bf5c495 | SuccessfulCreate | Created pod: designate-operator-controller-manager-6d8bf5c495-nn59f |
| | openstack-operators | replicaset-controller | keystone-operator-controller-manager-b4d948c87 | SuccessfulCreate | Created pod: keystone-operator-controller-manager-b4d948c87-xnzn6 |
| | openstack-operators | replicaset-controller | infra-operator-controller-manager-5f879c76b6 | SuccessfulCreate | Created pod: infra-operator-controller-manager-5f879c76b6-2x4ww |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | horizon-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | horizon-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-approver | horizon-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | heat-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | heat-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | deployment-controller | horizon-operator-controller-manager | ScalingReplicaSet | Scaled up replica set horizon-operator-controller-manager-5b9b8895d5 to 1 |
| | openstack-operators | replicaset-controller | horizon-operator-controller-manager-5b9b8895d5 | SuccessfulCreate | Created pod: horizon-operator-controller-manager-5b9b8895d5-2wdk9 |
| | openstack-operators | replicaset-controller | openstack-operator-controller-manager-74d597bfd6 | SuccessfulCreate | Created pod: openstack-operator-controller-manager-74d597bfd6-98qgl |
| | openstack-operators | deployment-controller | openstack-operator-controller-manager | ScalingReplicaSet | Scaled up replica set openstack-operator-controller-manager-74d597bfd6 to 1 |
| | openstack-operators | deployment-controller | rabbitmq-cluster-operator-manager | ScalingReplicaSet | Scaled up replica set rabbitmq-cluster-operator-manager-668c99d594 to 1 |
| | openstack-operators | multus | designate-operator-controller-manager-6d8bf5c495-nn59f | AddedInterface | Add eth0 [10.128.0.151/23] from ovn-kubernetes |
| | openstack-operators | kubelet | designate-operator-controller-manager-6d8bf5c495-nn59f | Pulling | Pulling image "quay.io/openstack-k8s-operators/designate-operator@sha256:c1e33e962043cd6e3d09ebd225cb72781451dba7af2d57522e5c6eedbdc91642" |
| | openstack-operators | replicaset-controller | rabbitmq-cluster-operator-manager-668c99d594 | SuccessfulCreate | Created pod: rabbitmq-cluster-operator-manager-668c99d594-hqlr5 |
| | openstack-operators | replicaset-controller | placement-operator-controller-manager-8497b45c89 | SuccessfulCreate | Created pod: placement-operator-controller-manager-8497b45c89-dbcqg |
| | openstack-operators | deployment-controller | telemetry-operator-controller-manager | ScalingReplicaSet | Scaled up replica set telemetry-operator-controller-manager-7f45b4ff68 to 1 |
| | openstack-operators | replicaset-controller | telemetry-operator-controller-manager-7f45b4ff68 | SuccessfulCreate | Created pod: telemetry-operator-controller-manager-7f45b4ff68-wk82b |
| | openstack-operators | replicaset-controller | swift-operator-controller-manager-68f46476f | SuccessfulCreate | Created pod: swift-operator-controller-manager-68f46476f-zdksg |
| | openstack-operators | deployment-controller | swift-operator-controller-manager | ScalingReplicaSet | Scaled up replica set swift-operator-controller-manager-68f46476f to 1 |
| | openstack-operators | replicaset-controller | test-operator-controller-manager-7866795846 | SuccessfulCreate | Created pod: test-operator-controller-manager-7866795846-2vx66 |
| | openstack-operators | deployment-controller | test-operator-controller-manager | ScalingReplicaSet | Scaled up replica set test-operator-controller-manager-7866795846 to 1 |
| | openstack-operators | multus | cinder-operator-controller-manager-5d946d989d-6mnh8 | AddedInterface | Add eth0 [10.128.0.149/23] from ovn-kubernetes |
| | openstack-operators | replicaset-controller | watcher-operator-controller-manager-5db88f68c | SuccessfulCreate | Created pod: watcher-operator-controller-manager-5db88f68c-ctk27 |
| | openstack-operators | deployment-controller | watcher-operator-controller-manager | ScalingReplicaSet | Scaled up replica set watcher-operator-controller-manager-5db88f68c to 1 |
| | openstack-operators | kubelet | barbican-operator-controller-manager-868647ff47-58dhd | Pulling | Pulling image "quay.io/openstack-k8s-operators/barbican-operator@sha256:90ad8fd8c1889b6be77925016532218eb6149d2c1c8535a5f9f1775c776fa6cc" |
| | openstack-operators | multus | barbican-operator-controller-manager-868647ff47-58dhd | AddedInterface | Add eth0 [10.128.0.150/23] from ovn-kubernetes |
| | openstack-operators | cert-manager-certificates-trigger | watcher-operator-metrics-certs | Issuing | Issuing certificate as Secret does not exist |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | keystone-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | infra-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-trigger | swift-operator-metrics-certs | Issuing | Issuing certificate as Secret does not exist |
| | openstack-operators | cert-manager-certificates-key-manager | mariadb-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "mariadb-operator-metrics-certs-gdmgl" |
| | openstack-operators | kubelet | neutron-operator-controller-manager-64ddbf8bb-5mtgr | Pulling | Pulling image "quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf" |
| | openstack-operators | multus | neutron-operator-controller-manager-64ddbf8bb-5mtgr | AddedInterface | Add eth0 [10.128.0.160/23] from ovn-kubernetes |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | infra-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | multus | keystone-operator-controller-manager-b4d948c87-xnzn6 | AddedInterface | Add eth0 [10.128.0.157/23] from ovn-kubernetes |
| | openstack-operators | kubelet | keystone-operator-controller-manager-b4d948c87-xnzn6 | Pulling | Pulling image "quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1" |
| | openstack-operators | kubelet | cinder-operator-controller-manager-5d946d989d-6mnh8 | Pulling | Pulling image "quay.io/openstack-k8s-operators/cinder-operator@sha256:2b8ab3063af4aaeed0198197aae6f391c6647ac686c94c85668537f1d5933979" |
| | openstack-operators | kubelet | mariadb-operator-controller-manager-6994f66f48-dgqgn | Pulling | Pulling image "quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a" |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | keystone-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | keystone-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | keystone-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | keystone-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | multus | mariadb-operator-controller-manager-6994f66f48-dgqgn | AddedInterface | Add eth0 [10.128.0.159/23] from ovn-kubernetes |
| | openstack-operators | cert-manager-certificates-issuing | glance-operator-metrics-certs | Issuing | The certificate has been successfully issued |
| | openstack-operators | multus | heat-operator-controller-manager-69f49c598c-ngkpp | AddedInterface | Add eth0 [10.128.0.153/23] from ovn-kubernetes |
| | openstack-operators | kubelet | heat-operator-controller-manager-69f49c598c-ngkpp | Pulling | Pulling image "quay.io/openstack-k8s-operators/heat-operator@sha256:e8a675284ff97a1d3f0f07583863be20b20b4aa48ebb34dbc80d83fe39d757b2" |
| | openstack-operators | cert-manager-certificates-key-manager | manila-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "manila-operator-metrics-certs-z4nvk" |
| | openstack-operators | cert-manager-certificates-request-manager | keystone-operator-metrics-certs | Requested | Created new CertificateRequest resource "keystone-operator-metrics-certs-1" |
| | openstack-operators | multus | nova-operator-controller-manager-567668f5cf-2td54 | AddedInterface | Add eth0 [10.128.0.161/23] from ovn-kubernetes |
| | openstack-operators | multus | manila-operator-controller-manager-54f6768c69-fnw4p | AddedInterface | Add eth0 [10.128.0.158/23] from ovn-kubernetes |
| | openstack-operators | kubelet | manila-operator-controller-manager-54f6768c69-fnw4p | Pulling | Pulling image "quay.io/openstack-k8s-operators/manila-operator@sha256:8fb0a33b8d93cf9f84f079af5f2ceb680afada4e44542514959146779f57f64c" |
| | openstack-operators | cert-manager-certificates-key-manager | octavia-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "octavia-operator-metrics-certs-mblj7" |
| | openstack-operators | kubelet | nova-operator-controller-manager-567668f5cf-2td54 | Pulling | Pulling image "quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838" |
| | openstack-operators | kubelet | ironic-operator-controller-manager-554564d7fc-x78p9 | Pulling | Pulling image "quay.io/openstack-k8s-operators/ironic-operator@sha256:7e1b0b7b172ad0d707ab80dd72d609e1d0f5bbd38a22c24a28ed0f17a960c867" |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | infra-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | multus | ironic-operator-controller-manager-554564d7fc-x78p9 | AddedInterface | Add eth0 [10.128.0.156/23] from ovn-kubernetes |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | infra-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-request-manager | infra-operator-metrics-certs | Requested | Created new CertificateRequest resource "infra-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | infra-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | multus | glance-operator-controller-manager-77987464f4-sqmnn | AddedInterface | Add eth0 [10.128.0.152/23] from ovn-kubernetes |
| | openstack-operators | kubelet | glance-operator-controller-manager-77987464f4-sqmnn | Pulling | Pulling image "quay.io/openstack-k8s-operators/glance-operator@sha256:1ab3ec59cd8e30dd8423e91ad832403bdefbae3b8ac47e15578d5a677d7ba0df" |
openstack-operators |
multus |
glance-operator-controller-manager-77987464f4-sqmnn |
AddedInterface |
Add eth0 [10.128.0.152/23] from ovn-kubernetes | |
openstack-operators |
multus |
horizon-operator-controller-manager-5b9b8895d5-2wdk9 |
AddedInterface |
Add eth0 [10.128.0.154/23] from ovn-kubernetes | |
openstack-operators |
kubelet |
horizon-operator-controller-manager-5b9b8895d5-2wdk9 |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/horizon-operator@sha256:9f2e1299d908411457e53b49e1062265d2b9d76f6719db24d1be9347c388e4da" | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
infra-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-key-manager |
nova-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "nova-operator-metrics-certs-4v46p" | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
infra-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-key-manager |
placement-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "placement-operator-metrics-certs-frl8q" | |
openstack-operators |
kubelet |
neutron-operator-controller-manager-64ddbf8bb-5mtgr |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf" | |
openstack-operators |
kubelet |
horizon-operator-controller-manager-5b9b8895d5-2wdk9 |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/horizon-operator@sha256:9f2e1299d908411457e53b49e1062265d2b9d76f6719db24d1be9347c388e4da" | |
openstack-operators |
cert-manager-certificates-key-manager |
placement-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "placement-operator-metrics-certs-frl8q" | |
openstack-operators |
cert-manager-certificates-issuing |
glance-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificates-key-manager |
nova-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "nova-operator-metrics-certs-4v46p" | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
infra-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
multus |
horizon-operator-controller-manager-5b9b8895d5-2wdk9 |
AddedInterface |
Add eth0 [10.128.0.154/23] from ovn-kubernetes | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
infra-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
multus |
heat-operator-controller-manager-69f49c598c-ngkpp |
AddedInterface |
Add eth0 [10.128.0.153/23] from ovn-kubernetes | |
openstack-operators |
kubelet |
heat-operator-controller-manager-69f49c598c-ngkpp |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/heat-operator@sha256:e8a675284ff97a1d3f0f07583863be20b20b4aa48ebb34dbc80d83fe39d757b2" | |
openstack-operators |
kubelet |
rabbitmq-cluster-operator-manager-668c99d594-hqlr5 |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" | |
openstack-operators |
multus |
rabbitmq-cluster-operator-manager-668c99d594-hqlr5 |
AddedInterface |
Add eth0 [10.128.0.171/23] from ovn-kubernetes | |
openstack-operators |
kubelet |
watcher-operator-controller-manager-5db88f68c-ctk27 |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0" | |
openstack-operators |
multus |
watcher-operator-controller-manager-5db88f68c-ctk27 |
AddedInterface |
Add eth0 [10.128.0.169/23] from ovn-kubernetes | |
openstack-operators |
multus |
placement-operator-controller-manager-8497b45c89-dbcqg |
AddedInterface |
Add eth0 [10.128.0.165/23] from ovn-kubernetes | |
openstack-operators |
kubelet |
placement-operator-controller-manager-8497b45c89-dbcqg |
Failed |
Error: ErrImagePull | |
openstack-operators |
kubelet |
placement-operator-controller-manager-8497b45c89-dbcqg |
Failed |
Failed to pull image "quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd": pull QPS exceeded | |
openstack-operators |
kubelet |
placement-operator-controller-manager-8497b45c89-dbcqg |
Failed |
Error: ErrImagePull | |
openstack-operators |
cert-manager-certificates-trigger |
telemetry-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
kubelet |
placement-operator-controller-manager-8497b45c89-dbcqg |
Failed |
Failed to pull image "quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd": pull QPS exceeded | |
openstack-operators |
multus |
placement-operator-controller-manager-8497b45c89-dbcqg |
AddedInterface |
Add eth0 [10.128.0.165/23] from ovn-kubernetes | |
openstack-operators |
multus |
test-operator-controller-manager-7866795846-2vx66 |
AddedInterface |
Add eth0 [10.128.0.168/23] from ovn-kubernetes | |
openstack-operators |
multus |
swift-operator-controller-manager-68f46476f-zdksg |
AddedInterface |
Add eth0 [10.128.0.166/23] from ovn-kubernetes | |
openstack-operators |
multus |
octavia-operator-controller-manager-69f8888797-6sx67 |
AddedInterface |
Add eth0 [10.128.0.162/23] from ovn-kubernetes | |
openstack-operators |
kubelet |
swift-operator-controller-manager-68f46476f-zdksg |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04" | |
openstack-operators |
kubelet |
telemetry-operator-controller-manager-7f45b4ff68-wk82b |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/telemetry-operator@sha256:66a4b9322ebb573313178ea88e31026d4532f461592b9fae2dff71efd9256d99" | |
openstack-operators |
multus |
rabbitmq-cluster-operator-manager-668c99d594-hqlr5 |
AddedInterface |
Add eth0 [10.128.0.171/23] from ovn-kubernetes | |
openstack-operators |
kubelet |
rabbitmq-cluster-operator-manager-668c99d594-hqlr5 |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" | |
openstack-operators |
cert-manager-certificaterequests-approver |
infra-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
infra-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
infra-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
multus |
watcher-operator-controller-manager-5db88f68c-ctk27 |
AddedInterface |
Add eth0 [10.128.0.169/23] from ovn-kubernetes | |
openstack-operators |
kubelet |
watcher-operator-controller-manager-5db88f68c-ctk27 |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0" | |
openstack-operators |
kubelet |
test-operator-controller-manager-7866795846-2vx66 |
Failed |
Failed to pull image "quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6": pull QPS exceeded | |
openstack-operators |
multus |
swift-operator-controller-manager-68f46476f-zdksg |
AddedInterface |
Add eth0 [10.128.0.166/23] from ovn-kubernetes | |
openstack-operators |
kubelet |
swift-operator-controller-manager-68f46476f-zdksg |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04" | |
openstack-operators |
cert-manager-certificaterequests-approver |
keystone-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
multus |
octavia-operator-controller-manager-69f8888797-6sx67 |
AddedInterface |
Add eth0 [10.128.0.162/23] from ovn-kubernetes | |
openstack-operators |
kubelet |
test-operator-controller-manager-7866795846-2vx66 |
Failed |
Error: ErrImagePull | |
openstack-operators |
kubelet |
test-operator-controller-manager-7866795846-2vx66 |
Failed |
Error: ErrImagePull | |
openstack-operators |
kubelet |
test-operator-controller-manager-7866795846-2vx66 |
Failed |
Failed to pull image "quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6": pull QPS exceeded | |
openstack-operators |
kubelet |
octavia-operator-controller-manager-69f8888797-6sx67 |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34" | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
keystone-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
multus |
test-operator-controller-manager-7866795846-2vx66 |
AddedInterface |
Add eth0 [10.128.0.168/23] from ovn-kubernetes | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
keystone-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
kubelet |
ovn-operator-controller-manager-d44cf6b75-gwh4x |
Failed |
Error: ErrImagePull | |
openstack-operators |
kubelet |
ovn-operator-controller-manager-d44cf6b75-gwh4x |
Failed |
Failed to pull image "quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759": pull QPS exceeded | |
openstack-operators |
multus |
ovn-operator-controller-manager-d44cf6b75-gwh4x |
AddedInterface |
Add eth0 [10.128.0.164/23] from ovn-kubernetes | |
openstack-operators |
cert-manager-certificates-trigger |
telemetry-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
keystone-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
keystone-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
kubelet |
octavia-operator-controller-manager-69f8888797-6sx67 |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34" | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
infra-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificaterequests-approver |
keystone-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
kubelet |
telemetry-operator-controller-manager-7f45b4ff68-wk82b |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/telemetry-operator@sha256:66a4b9322ebb573313178ea88e31026d4532f461592b9fae2dff71efd9256d99" | |
openstack-operators |
multus |
telemetry-operator-controller-manager-7f45b4ff68-wk82b |
AddedInterface |
Add eth0 [10.128.0.167/23] from ovn-kubernetes | |
openstack-operators |
multus |
ovn-operator-controller-manager-d44cf6b75-gwh4x |
AddedInterface |
Add eth0 [10.128.0.164/23] from ovn-kubernetes | |
openstack-operators |
kubelet |
ovn-operator-controller-manager-d44cf6b75-gwh4x |
Failed |
Failed to pull image "quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759": pull QPS exceeded | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
infra-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificaterequests-approver |
infra-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
kubelet |
ovn-operator-controller-manager-d44cf6b75-gwh4x |
Failed |
Error: ErrImagePull | |
openstack-operators |
multus |
telemetry-operator-controller-manager-7f45b4ff68-wk82b |
AddedInterface |
Add eth0 [10.128.0.167/23] from ovn-kubernetes | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
nova-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-request-manager |
nova-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "nova-operator-metrics-certs-1" | |
openstack-operators |
cert-manager-certificates-trigger |
infra-operator-serving-cert |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
nova-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-trigger |
openstack-operator-serving-cert |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-trigger |
openstack-baremetal-operator-serving-cert |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
nova-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
nova-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
nova-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
nova-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
| (x2) | openstack-operators |
kubelet |
ovn-operator-controller-manager-d44cf6b75-gwh4x |
BackOff |
Back-off pulling image "quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759" |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
nova-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-trigger |
test-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-trigger |
infra-operator-serving-cert |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
nova-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
| (x2) | openstack-operators |
kubelet |
ovn-operator-controller-manager-d44cf6b75-gwh4x |
Failed |
Error: ImagePullBackOff |
| (x2) | openstack-operators |
kubelet |
placement-operator-controller-manager-8497b45c89-dbcqg |
Failed |
Error: ImagePullBackOff |
| (x2) | openstack-operators |
kubelet |
ovn-operator-controller-manager-d44cf6b75-gwh4x |
BackOff |
Back-off pulling image "quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759" |
openstack-operators |
cert-manager-certificates-issuing |
horizon-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificates-request-manager |
nova-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "nova-operator-metrics-certs-1" | |
| (x2) | openstack-operators |
kubelet |
placement-operator-controller-manager-8497b45c89-dbcqg |
BackOff |
Back-off pulling image "quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd" |
| (x2) | openstack-operators |
kubelet |
test-operator-controller-manager-7866795846-2vx66 |
Failed |
Error: ImagePullBackOff |
openstack-operators |
cert-manager-certificates-trigger |
openstack-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
| (x2) | openstack-operators |
kubelet |
placement-operator-controller-manager-8497b45c89-dbcqg |
BackOff |
Back-off pulling image "quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd" |
| (x2) | openstack-operators |
kubelet |
placement-operator-controller-manager-8497b45c89-dbcqg |
Failed |
Error: ImagePullBackOff |
openstack-operators |
cert-manager-certificates-issuing |
heat-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificates-trigger |
test-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
nova-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
| (x2) | openstack-operators |
kubelet |
test-operator-controller-manager-7866795846-2vx66 |
BackOff |
Back-off pulling image "quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6" |
openstack-operators |
cert-manager-certificates-trigger |
openstack-baremetal-operator-serving-cert |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-trigger |
openstack-operator-serving-cert |
Issuing |
Issuing certificate as Secret does not exist | |
openstack-operators |
cert-manager-certificates-trigger |
openstack-operator-metrics-certs |
Issuing |
Issuing certificate as Secret does not exist | |
| (x2) | openstack-operators |
kubelet |
ovn-operator-controller-manager-d44cf6b75-gwh4x |
Failed |
Error: ImagePullBackOff |
| (x2) | openstack-operators |
kubelet |
test-operator-controller-manager-7866795846-2vx66 |
BackOff |
Back-off pulling image "quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6" |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
nova-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-issuing |
horizon-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
| (x2) | openstack-operators |
kubelet |
test-operator-controller-manager-7866795846-2vx66 |
Failed |
Error: ImagePullBackOff |
openstack-operators |
cert-manager-certificates-issuing |
heat-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificates-key-manager |
telemetry-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "telemetry-operator-metrics-certs-6g6sr" | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
manila-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
manila-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
manila-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
octavia-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
manila-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
mariadb-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
manila-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-request-manager |
octavia-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "octavia-operator-metrics-certs-1" | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
mariadb-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
octavia-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
octavia-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-request-manager |
placement-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "placement-operator-metrics-certs-1" | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
manila-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-request-manager |
manila-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "manila-operator-metrics-certs-1" | |
openstack-operators |
cert-manager-certificates-key-manager |
watcher-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "watcher-operator-metrics-certs-xmvxj" | |
openstack-operators |
cert-manager-certificates-request-manager |
octavia-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "octavia-operator-metrics-certs-1" | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
mariadb-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
placement-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
placement-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
placement-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
placement-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
placement-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
ironic-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
ironic-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
nova-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificaterequests-approver |
nova-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
nova-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificates-request-manager |
manila-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "manila-operator-metrics-certs-1" | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
mariadb-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-key-manager |
test-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "test-operator-metrics-certs-lpj5n" | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
placement-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
mariadb-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
mariadb-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
manila-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
mariadb-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
mariadb-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
manila-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-key-manager |
swift-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "swift-operator-metrics-certs-555qs" | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
octavia-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-key-manager |
telemetry-operator-metrics-certs |
Generated |
Stored new private key in temporary Secret resource "telemetry-operator-metrics-certs-6g6sr" | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
placement-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
mariadb-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
mariadb-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-request-manager |
ironic-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "ironic-operator-metrics-certs-1" | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
octavia-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | placement-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | placement-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | manila-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | nova-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificaterequests-approver | nova-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | nova-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificates-request-manager | placement-operator-metrics-certs | Requested | Created new CertificateRequest resource "placement-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | ironic-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | placement-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-request-manager | ironic-operator-metrics-certs | Requested | Created new CertificateRequest resource "ironic-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | ironic-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-request-manager | mariadb-operator-metrics-certs | Requested | Created new CertificateRequest resource "mariadb-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificates-key-manager | watcher-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "watcher-operator-metrics-certs-xmvxj" |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | ironic-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | octavia-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | manila-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | ironic-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-request-manager | mariadb-operator-metrics-certs | Requested | Created new CertificateRequest resource "mariadb-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | ironic-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | ironic-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | ironic-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | octavia-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-key-manager | swift-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "swift-operator-metrics-certs-555qs" |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | ironic-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | octavia-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | octavia-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | octavia-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-key-manager | test-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "test-operator-metrics-certs-lpj5n" |
| | openstack-operators | cert-manager-certificaterequests-approver | placement-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-approver | placement-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-approver | mariadb-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-approver | ironic-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | ironic-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | ironic-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificaterequests-approver | mariadb-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificates-issuing | infra-operator-metrics-certs | Issuing | The certificate has been successfully issued |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | mariadb-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | octavia-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | placement-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-approver | ironic-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | octavia-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | ironic-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | ironic-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | placement-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | mariadb-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | octavia-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-approver | octavia-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | octavia-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificaterequests-approver | octavia-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | manila-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificates-issuing | infra-operator-metrics-certs | Issuing | The certificate has been successfully issued |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | manila-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | manila-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-approver | manila-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | mariadb-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | mariadb-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | manila-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-approver | manila-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | placement-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | placement-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | watcher-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | watcher-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| (x5) | openstack-operators | kubelet | infra-operator-controller-manager-5f879c76b6-2x4ww | FailedMount | MountVolume.SetUp failed for volume "cert" : secret "infra-operator-webhook-server-cert" not found |
| | openstack-operators | cert-manager-certificates-request-manager | telemetry-operator-metrics-certs | Requested | Created new CertificateRequest resource "telemetry-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | telemetry-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | watcher-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-request-manager | watcher-operator-metrics-certs | Requested | Created new CertificateRequest resource "watcher-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificates-issuing | keystone-operator-metrics-certs | Issuing | The certificate has been successfully issued |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | test-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | test-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | test-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | test-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | test-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | test-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | watcher-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | telemetry-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | watcher-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | test-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | test-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | telemetry-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | watcher-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | watcher-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-issuing | keystone-operator-metrics-certs | Issuing | The certificate has been successfully issued |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | test-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| (x5) | openstack-operators | kubelet | infra-operator-controller-manager-5f879c76b6-2x4ww | FailedMount | MountVolume.SetUp failed for volume "cert" : secret "infra-operator-webhook-server-cert" not found |
| | openstack-operators | cert-manager-certificates-key-manager | infra-operator-serving-cert | Generated | Stored new private key in temporary Secret resource "infra-operator-serving-cert-8cl6z" |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | test-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-request-manager | test-operator-metrics-certs | Requested | Created new CertificateRequest resource "test-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificates-request-manager | telemetry-operator-metrics-certs | Requested | Created new CertificateRequest resource "telemetry-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | watcher-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-key-manager | infra-operator-serving-cert | Generated | Stored new private key in temporary Secret resource "infra-operator-serving-cert-8cl6z" |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | watcher-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | watcher-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | telemetry-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | telemetry-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | telemetry-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | telemetry-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | swift-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | telemetry-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-request-manager | watcher-operator-metrics-certs | Requested | Created new CertificateRequest resource "watcher-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | swift-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | swift-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-request-manager | swift-operator-metrics-certs | Requested | Created new CertificateRequest resource "swift-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | swift-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | swift-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | swift-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | swift-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | swift-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | swift-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | swift-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | telemetry-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | telemetry-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-request-manager | test-operator-metrics-certs | Requested | Created new CertificateRequest resource "test-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificates-request-manager | swift-operator-metrics-certs | Requested | Created new CertificateRequest resource "swift-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | ovn-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | neutron-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | swift-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | neutron-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | neutron-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | neutron-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | neutron-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | neutron-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | test-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | swift-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-approver | test-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | test-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | test-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificates-request-manager | openstack-baremetal-operator-metrics-certs | Requested | Created new CertificateRequest resource "openstack-baremetal-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | openstack-baremetal-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | openstack-baremetal-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-key-manager | openstack-baremetal-operator-serving-cert | Generated | Stored new private key in temporary Secret resource "openstack-baremetal-operator-serving-cert-7n6m5" |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | neutron-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | neutron-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-request-manager | neutron-operator-metrics-certs | Requested | Created new CertificateRequest resource "neutron-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | ovn-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-approver | swift-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificates-request-manager | neutron-operator-metrics-certs | Requested | Created new CertificateRequest resource "neutron-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-baremetal-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | neutron-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | neutron-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | watcher-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | watcher-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-approver | watcher-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | openstack-baremetal-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | openstack-baremetal-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | openstack-baremetal-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | openstack-baremetal-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | ovn-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | ovn-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-approver | test-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificates-request-manager | ovn-operator-metrics-certs | Requested | Created new CertificateRequest resource "ovn-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | ovn-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | ovn-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-baremetal-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | watcher-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | watcher-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | openstack-baremetal-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | ovn-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificates-request-manager | ovn-operator-metrics-certs | Requested | Created new CertificateRequest resource "ovn-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | swift-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | ovn-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-ca | ovn-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-approver | watcher-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-issuer-vault | ovn-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | test-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificates-request-manager | openstack-baremetal-operator-metrics-certs | Requested | Created new CertificateRequest resource "openstack-baremetal-operator-metrics-certs-1" |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | openstack-baremetal-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-approver | swift-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificates-key-manager | openstack-baremetal-operator-serving-cert | Generated | Stored new private key in temporary Secret resource "openstack-baremetal-operator-serving-cert-7n6m5" |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | swift-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | telemetry-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificaterequests-approver | telemetry-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | telemetry-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-approver | telemetry-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificates-key-manager | openstack-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "openstack-operator-metrics-certs-q5w4l" |
| | openstack-operators | cert-manager-certificates-key-manager | openstack-operator-serving-cert | Generated | Stored new private key in temporary Secret resource "openstack-operator-serving-cert-f79zz" |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | telemetry-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | telemetry-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificates-key-manager | openstack-operator-serving-cert | Generated | Stored new private key in temporary Secret resource "openstack-operator-serving-cert-f79zz" |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | neutron-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificates-key-manager | openstack-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "openstack-operator-metrics-certs-q5w4l" |
| | openstack-operators | cert-manager-certificaterequests-approver | neutron-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | neutron-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | neutron-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack-operators | cert-manager-certificaterequests-approver | neutron-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | neutron-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients |
| | openstack-operators | cert-manager-certificaterequests-issuer-acme | infra-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack-operators | cert-manager-certificaterequests-approver | ovn-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack-operators | cert-manager-certificaterequests-issuer-venafi | infra-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
infra-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-issuing |
octavia-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
openstack-baremetal-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
openstack-baremetal-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificaterequests-approver |
openstack-baremetal-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
ovn-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
ovn-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificates-issuing |
manila-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificates-issuing |
octavia-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificates-request-manager |
infra-operator-serving-cert |
Requested |
Created new CertificateRequest resource "infra-operator-serving-cert-1" | |
openstack-operators |
cert-manager-certificaterequests-approver |
openstack-baremetal-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
infra-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
infra-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-request-manager |
infra-operator-serving-cert |
Requested |
Created new CertificateRequest resource "infra-operator-serving-cert-1" | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
openstack-baremetal-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificates-issuing |
placement-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
infra-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
infra-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
openstack-baremetal-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificates-issuing |
manila-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
infra-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
infra-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
infra-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
ovn-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificaterequests-approver |
ovn-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificates-issuing |
placement-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
ovn-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
openstack-baremetal-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
openstack-baremetal-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
openstack-baremetal-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
openstack-baremetal-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-request-manager |
openstack-baremetal-operator-serving-cert |
Requested |
Created new CertificateRequest resource "openstack-baremetal-operator-serving-cert-1" | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
openstack-baremetal-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
openstack-baremetal-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
openstack-baremetal-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
openstack-baremetal-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
openstack-baremetal-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
openstack-baremetal-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-request-manager |
openstack-baremetal-operator-serving-cert |
Requested |
Created new CertificateRequest resource "openstack-baremetal-operator-serving-cert-1" | |
openstack-operators |
cert-manager-certificates-request-manager |
openstack-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "openstack-operator-metrics-certs-1" | |
openstack-operators |
cert-manager-certificates-request-manager |
openstack-operator-serving-cert |
Requested |
Created new CertificateRequest resource "openstack-operator-serving-cert-1" | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
openstack-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
openstack-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
openstack-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-issuing |
mariadb-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
openstack-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
openstack-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
openstack-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
openstack-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
openstack-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
openstack-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
openstack-operator-metrics-certs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
openstack-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
infra-operator-serving-cert-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificates-request-manager |
openstack-operator-metrics-certs |
Requested |
Created new CertificateRequest resource "openstack-operator-metrics-certs-1" | |
openstack-operators |
cert-manager-certificaterequests-issuer-ca |
openstack-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
openstack-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
openstack-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
openstack-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
openstack-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-approver |
infra-operator-serving-cert-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
infra-operator-serving-cert-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
infra-operator-serving-cert-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificaterequests-issuer-vault |
openstack-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-approver |
infra-operator-serving-cert-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
infra-operator-serving-cert-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificaterequests-issuer-acme |
openstack-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-issuing |
mariadb-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
openstack-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificates-request-manager |
openstack-operator-serving-cert |
Requested |
Created new CertificateRequest resource "openstack-operator-serving-cert-1" | |
openstack-operators |
cert-manager-certificaterequests-issuer-venafi |
openstack-operator-serving-cert-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
openstack-baremetal-operator-serving-cert-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificaterequests-approver |
openstack-operator-serving-cert-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
openstack-operator-serving-cert-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificaterequests-approver |
openstack-operator-serving-cert-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificaterequests-approver |
openstack-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
openstack-baremetal-operator-serving-cert-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificates-issuing |
ironic-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
openstack-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
openstack-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
openstack-operator-serving-cert-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
openstack-operator-metrics-certs-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
openstack-operator-serving-cert-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificaterequests-approver |
openstack-baremetal-operator-serving-cert-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificaterequests-approver |
openstack-baremetal-operator-serving-cert-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
openstack-baremetal-operator-serving-cert-1 |
BadConfig |
Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
openstack-baremetal-operator-serving-cert-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificaterequests-approver |
openstack-operator-metrics-certs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
openstack-operator-serving-cert-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificaterequests-issuer-selfsigned |
openstack-operator-metrics-certs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack-operators |
cert-manager-certificates-issuing |
ironic-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificates-issuing |
test-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificates-issuing |
swift-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificates-issuing |
swift-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificates-issuing |
test-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificates-issuing |
telemetry-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificates-issuing |
ovn-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
kubelet |
designate-operator-controller-manager-6d8bf5c495-nn59f |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/designate-operator@sha256:c1e33e962043cd6e3d09ebd225cb72781451dba7af2d57522e5c6eedbdc91642" in 14.886s (14.886s including waiting). Image size: 195315176 bytes. | |
openstack-operators |
cert-manager-certificates-issuing |
ovn-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
kubelet |
designate-operator-controller-manager-6d8bf5c495-nn59f |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/designate-operator@sha256:c1e33e962043cd6e3d09ebd225cb72781451dba7af2d57522e5c6eedbdc91642" in 14.886s (14.886s including waiting). Image size: 195315176 bytes. | |
openstack-operators |
cert-manager-certificates-issuing |
telemetry-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
| (x2) | openstack-operators |
kubelet |
placement-operator-controller-manager-8497b45c89-dbcqg |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd" |
openstack-operators |
cert-manager-certificates-issuing |
neutron-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
| (x6) | openstack-operators |
kubelet |
openstack-baremetal-operator-controller-manager-5f8cd6b89br8pdn |
FailedMount |
MountVolume.SetUp failed for volume "cert" : secret "openstack-baremetal-operator-webhook-server-cert" not found |
| (x2) | openstack-operators |
kubelet |
placement-operator-controller-manager-8497b45c89-dbcqg |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd" |
| (x6) | openstack-operators |
kubelet |
openstack-operator-controller-manager-74d597bfd6-98qgl |
FailedMount |
MountVolume.SetUp failed for volume "webhook-certs" : secret "webhook-server-cert" not found |
| (x6) | openstack-operators |
kubelet |
openstack-baremetal-operator-controller-manager-5f8cd6b89br8pdn |
FailedMount |
MountVolume.SetUp failed for volume "cert" : secret "openstack-baremetal-operator-webhook-server-cert" not found |
| (x6) | openstack-operators |
kubelet |
openstack-operator-controller-manager-74d597bfd6-98qgl |
FailedMount |
MountVolume.SetUp failed for volume "metrics-certs" : secret "metrics-server-cert" not found |
| (x6) | openstack-operators |
kubelet |
openstack-operator-controller-manager-74d597bfd6-98qgl |
FailedMount |
MountVolume.SetUp failed for volume "metrics-certs" : secret "metrics-server-cert" not found |
openstack-operators |
cert-manager-certificates-issuing |
nova-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
| (x6) | openstack-operators |
kubelet |
openstack-operator-controller-manager-74d597bfd6-98qgl |
FailedMount |
MountVolume.SetUp failed for volume "webhook-certs" : secret "webhook-server-cert" not found |
openstack-operators |
cert-manager-certificates-issuing |
nova-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificates-issuing |
infra-operator-serving-cert |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificates-issuing |
infra-operator-serving-cert |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
kubelet |
mariadb-operator-controller-manager-6994f66f48-dgqgn |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a" in 15.357s (15.357s including waiting). Image size: 189413585 bytes. | |
openstack-operators |
kubelet |
mariadb-operator-controller-manager-6994f66f48-dgqgn |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a" in 15.357s (15.357s including waiting). Image size: 189413585 bytes. | |
openstack-operators |
cert-manager-certificates-issuing |
neutron-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
| (x2) | openstack-operators |
kubelet |
test-operator-controller-manager-7866795846-2vx66 |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6" |
openstack-operators |
cert-manager-certificates-issuing |
openstack-baremetal-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificates-issuing |
openstack-baremetal-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
| (x2) | openstack-operators |
kubelet |
ovn-operator-controller-manager-d44cf6b75-gwh4x |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759" |
| (x2) | openstack-operators |
kubelet |
test-operator-controller-manager-7866795846-2vx66 |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6" |
| (x2) | openstack-operators |
kubelet |
ovn-operator-controller-manager-d44cf6b75-gwh4x |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759" |
openstack-operators |
kubelet |
neutron-operator-controller-manager-64ddbf8bb-5mtgr |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf" in 16.115s (16.115s including waiting). Image size: 191026634 bytes. | |
openstack-operators |
cert-manager-certificates-issuing |
watcher-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
kubelet |
manila-operator-controller-manager-54f6768c69-fnw4p |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/manila-operator@sha256:8fb0a33b8d93cf9f84f079af5f2ceb680afada4e44542514959146779f57f64c" in 16.92s (16.92s including waiting). Image size: 191246785 bytes. | |
openstack-operators |
kubelet |
manila-operator-controller-manager-54f6768c69-fnw4p |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/manila-operator@sha256:8fb0a33b8d93cf9f84f079af5f2ceb680afada4e44542514959146779f57f64c" in 16.92s (16.92s including waiting). Image size: 191246785 bytes. | |
openstack-operators |
kubelet |
horizon-operator-controller-manager-5b9b8895d5-2wdk9 |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/horizon-operator@sha256:9f2e1299d908411457e53b49e1062265d2b9d76f6719db24d1be9347c388e4da" in 16.9s (16.9s including waiting). Image size: 190376908 bytes. | |
openstack-operators |
cert-manager-certificates-issuing |
watcher-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
cert-manager-certificates-issuing |
openstack-baremetal-operator-serving-cert |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
kubelet |
neutron-operator-controller-manager-64ddbf8bb-5mtgr |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf" in 16.115s (16.115s including waiting). Image size: 191026634 bytes. | |
openstack-operators |
cert-manager-certificates-issuing |
openstack-baremetal-operator-serving-cert |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
kubelet |
horizon-operator-controller-manager-5b9b8895d5-2wdk9 |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/horizon-operator@sha256:9f2e1299d908411457e53b49e1062265d2b9d76f6719db24d1be9347c388e4da" in 16.9s (16.9s including waiting). Image size: 190376908 bytes. | |
openstack-operators |
kubelet |
telemetry-operator-controller-manager-7f45b4ff68-wk82b |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/telemetry-operator@sha256:66a4b9322ebb573313178ea88e31026d4532f461592b9fae2dff71efd9256d99" in 16.597s (16.597s including waiting). Image size: 196099048 bytes. | |
openstack-operators |
kubelet |
ironic-operator-controller-manager-554564d7fc-x78p9 |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/ironic-operator@sha256:7e1b0b7b172ad0d707ab80dd72d609e1d0f5bbd38a22c24a28ed0f17a960c867" in 17.479s (17.479s including waiting). Image size: 191665087 bytes. | |
openstack-operators |
cert-manager-certificates-issuing |
openstack-operator-serving-cert |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
kubelet |
keystone-operator-controller-manager-b4d948c87-xnzn6 |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1" in 17.171s (17.171s including waiting). Image size: 193023123 bytes. | |
openstack-operators |
cert-manager-certificates-issuing |
openstack-operator-serving-cert |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
kubelet |
octavia-operator-controller-manager-69f8888797-6sx67 |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34" in 16.598s (16.598s including waiting). Image size: 193556429 bytes. | |
openstack-operators |
kubelet |
watcher-operator-controller-manager-5db88f68c-ctk27 |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0" in 16.599s (16.599s including waiting). Image size: 190936525 bytes. | |
openstack-operators |
kubelet |
keystone-operator-controller-manager-b4d948c87-xnzn6 |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1" in 17.171s (17.171s including waiting). Image size: 193023123 bytes. | |
openstack-operators |
kubelet |
heat-operator-controller-manager-69f49c598c-ngkpp |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/heat-operator@sha256:e8a675284ff97a1d3f0f07583863be20b20b4aa48ebb34dbc80d83fe39d757b2" in 17.479s (17.479s including waiting). Image size: 191605671 bytes. | |
openstack-operators |
kubelet |
glance-operator-controller-manager-77987464f4-sqmnn |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/glance-operator@sha256:1ab3ec59cd8e30dd8423e91ad832403bdefbae3b8ac47e15578d5a677d7ba0df" in 17.494s (17.494s including waiting). Image size: 191991231 bytes. | |
openstack-operators |
kubelet |
nova-operator-controller-manager-567668f5cf-2td54 |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838" in 17.174s (17.174s including waiting). Image size: 193562469 bytes. | |
openstack-operators |
kubelet |
watcher-operator-controller-manager-5db88f68c-ctk27 |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0" in 16.599s (16.599s including waiting). Image size: 190936525 bytes. | |
openstack-operators |
cert-manager-certificates-issuing |
openstack-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
kubelet |
glance-operator-controller-manager-77987464f4-sqmnn |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/glance-operator@sha256:1ab3ec59cd8e30dd8423e91ad832403bdefbae3b8ac47e15578d5a677d7ba0df" in 17.494s (17.494s including waiting). Image size: 191991231 bytes. | |
openstack-operators |
kubelet |
cinder-operator-controller-manager-5d946d989d-6mnh8 |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/cinder-operator@sha256:2b8ab3063af4aaeed0198197aae6f391c6647ac686c94c85668537f1d5933979" in 18.089s (18.089s including waiting). Image size: 191425981 bytes. | |
openstack-operators |
kubelet |
cinder-operator-controller-manager-5d946d989d-6mnh8 |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/cinder-operator@sha256:2b8ab3063af4aaeed0198197aae6f391c6647ac686c94c85668537f1d5933979" in 18.089s (18.089s including waiting). Image size: 191425981 bytes. | |
openstack-operators |
kubelet |
barbican-operator-controller-manager-868647ff47-58dhd |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/barbican-operator@sha256:90ad8fd8c1889b6be77925016532218eb6149d2c1c8535a5f9f1775c776fa6cc" in 18.129s (18.129s including waiting). Image size: 191103449 bytes. | |
openstack-operators |
kubelet |
ironic-operator-controller-manager-554564d7fc-x78p9 |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/ironic-operator@sha256:7e1b0b7b172ad0d707ab80dd72d609e1d0f5bbd38a22c24a28ed0f17a960c867" in 17.479s (17.479s including waiting). Image size: 191665087 bytes. | |
openstack-operators |
kubelet |
octavia-operator-controller-manager-69f8888797-6sx67 |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34" in 16.598s (16.598s including waiting). Image size: 193556429 bytes. | |
openstack-operators |
kubelet |
telemetry-operator-controller-manager-7f45b4ff68-wk82b |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/telemetry-operator@sha256:66a4b9322ebb573313178ea88e31026d4532f461592b9fae2dff71efd9256d99" in 16.597s (16.597s including waiting). Image size: 196099048 bytes. | |
openstack-operators |
kubelet |
heat-operator-controller-manager-69f49c598c-ngkpp |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/heat-operator@sha256:e8a675284ff97a1d3f0f07583863be20b20b4aa48ebb34dbc80d83fe39d757b2" in 17.479s (17.479s including waiting). Image size: 191605671 bytes. | |
openstack-operators |
kubelet |
nova-operator-controller-manager-567668f5cf-2td54 |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838" in 17.174s (17.174s including waiting). Image size: 193562469 bytes. | |
openstack-operators |
kubelet |
barbican-operator-controller-manager-868647ff47-58dhd |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/barbican-operator@sha256:90ad8fd8c1889b6be77925016532218eb6149d2c1c8535a5f9f1775c776fa6cc" in 18.129s (18.129s including waiting). Image size: 191103449 bytes. | |
openstack-operators |
cert-manager-certificates-issuing |
openstack-operator-metrics-certs |
Issuing |
The certificate has been successfully issued | |
openstack-operators |
kubelet |
swift-operator-controller-manager-68f46476f-zdksg |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04" in 16.597s (16.597s including waiting). Image size: 192091569 bytes. | |
openstack-operators |
kubelet |
swift-operator-controller-manager-68f46476f-zdksg |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04" in 16.597s (16.597s including waiting). Image size: 192091569 bytes. | |
openstack-operators |
multus |
infra-operator-controller-manager-5f879c76b6-2x4ww |
AddedInterface |
Add eth0 [10.128.0.155/23] from ovn-kubernetes | |
openstack-operators |
kubelet |
ovn-operator-controller-manager-d44cf6b75-gwh4x |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759" in 3.128s (3.128s including waiting). Image size: 190089624 bytes. | |
openstack-operators |
kubelet |
placement-operator-controller-manager-8497b45c89-dbcqg |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd" in 4.073s (4.073s including waiting). Image size: 190626789 bytes. | |
openstack-operators |
kubelet |
rabbitmq-cluster-operator-manager-668c99d594-hqlr5 |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" in 17.883s (17.883s including waiting). Image size: 176351298 bytes. | |
openstack-operators |
kubelet |
infra-operator-controller-manager-5f879c76b6-2x4ww |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/infra-operator@sha256:aef5ea3dc1d4f5b63416ee1cc12d0360a64229bb3fb954be3dd85eec8f4ae62a" | |
openstack-operators |
kubelet |
test-operator-controller-manager-7866795846-2vx66 |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6" in 3.081s (3.081s including waiting). Image size: 188905402 bytes. | |
openstack-operators |
kubelet |
test-operator-controller-manager-7866795846-2vx66 |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6" in 3.081s (3.081s including waiting). Image size: 188905402 bytes. | |
openstack-operators |
kubelet |
manila-operator-controller-manager-54f6768c69-fnw4p |
Started |
Started container manager | |
openstack-operators |
kubelet |
manila-operator-controller-manager-54f6768c69-fnw4p |
Created |
Created container: manager | |
openstack-operators |
kubelet |
mariadb-operator-controller-manager-6994f66f48-dgqgn |
Started |
Started container manager | |
openstack-operators |
designate-operator-controller-manager-6d8bf5c495-nn59f_9da34341-5169-4515-80c4-ab4bac19044a |
f9497e05.openstack.org |
LeaderElection |
designate-operator-controller-manager-6d8bf5c495-nn59f_9da34341-5169-4515-80c4-ab4bac19044a became leader | |
openstack-operators |
kubelet |
designate-operator-controller-manager-6d8bf5c495-nn59f |
Started |
Started container manager | |
openstack-operators |
kubelet |
designate-operator-controller-manager-6d8bf5c495-nn59f |
Created |
Created container: manager | |
openstack-operators |
kubelet |
rabbitmq-cluster-operator-manager-668c99d594-hqlr5 |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" in 17.883s (17.883s including waiting). Image size: 176351298 bytes. | |
openstack-operators |
kubelet |
infra-operator-controller-manager-5f879c76b6-2x4ww |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/infra-operator@sha256:aef5ea3dc1d4f5b63416ee1cc12d0360a64229bb3fb954be3dd85eec8f4ae62a" | |
openstack-operators |
multus |
infra-operator-controller-manager-5f879c76b6-2x4ww |
AddedInterface |
Add eth0 [10.128.0.155/23] from ovn-kubernetes | |
openstack-operators |
kubelet |
ovn-operator-controller-manager-d44cf6b75-gwh4x |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759" in 3.128s (3.128s including waiting). Image size: 190089624 bytes. | |
openstack-operators |
designate-operator-controller-manager-6d8bf5c495-nn59f_9da34341-5169-4515-80c4-ab4bac19044a |
f9497e05.openstack.org |
LeaderElection |
designate-operator-controller-manager-6d8bf5c495-nn59f_9da34341-5169-4515-80c4-ab4bac19044a became leader | |
openstack-operators |
kubelet |
manila-operator-controller-manager-54f6768c69-fnw4p |
Created |
Created container: manager | |
openstack-operators |
kubelet |
manila-operator-controller-manager-54f6768c69-fnw4p |
Started |
Started container manager | |
openstack-operators |
kubelet |
designate-operator-controller-manager-6d8bf5c495-nn59f |
Started |
Started container manager | |
openstack-operators |
kubelet |
designate-operator-controller-manager-6d8bf5c495-nn59f |
Created |
Created container: manager | |
openstack-operators |
kubelet |
placement-operator-controller-manager-8497b45c89-dbcqg |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd" in 4.073s (4.073s including waiting). Image size: 190626789 bytes. | |
openstack-operators |
mariadb-operator-controller-manager-6994f66f48-dgqgn_bd2ee824-a4b7-4a94-9dc0-8df17ba3f5f5 |
7c2a6c6b.openstack.org |
LeaderElection |
mariadb-operator-controller-manager-6994f66f48-dgqgn_bd2ee824-a4b7-4a94-9dc0-8df17ba3f5f5 became leader | |
openstack-operators |
kubelet |
mariadb-operator-controller-manager-6994f66f48-dgqgn |
Created |
Created container: manager | |
openstack-operators |
kubelet |
mariadb-operator-controller-manager-6994f66f48-dgqgn |
Started |
Started container manager | |
openstack-operators |
manila-operator-controller-manager-54f6768c69-fnw4p_6eef8cbd-9015-46a8-ac26-2f55bf01e6d0 |
858862a7.openstack.org |
LeaderElection |
manila-operator-controller-manager-54f6768c69-fnw4p_6eef8cbd-9015-46a8-ac26-2f55bf01e6d0 became leader | |
openstack-operators |
manila-operator-controller-manager-54f6768c69-fnw4p_6eef8cbd-9015-46a8-ac26-2f55bf01e6d0 |
858862a7.openstack.org |
LeaderElection |
manila-operator-controller-manager-54f6768c69-fnw4p_6eef8cbd-9015-46a8-ac26-2f55bf01e6d0 became leader | |
openstack-operators |
mariadb-operator-controller-manager-6994f66f48-dgqgn_bd2ee824-a4b7-4a94-9dc0-8df17ba3f5f5 |
7c2a6c6b.openstack.org |
LeaderElection |
mariadb-operator-controller-manager-6994f66f48-dgqgn_bd2ee824-a4b7-4a94-9dc0-8df17ba3f5f5 became leader | |
openstack-operators |
kubelet |
mariadb-operator-controller-manager-6994f66f48-dgqgn |
Created |
Created container: manager | |
openstack-operators |
kubelet |
barbican-operator-controller-manager-868647ff47-58dhd |
Started |
Started container manager | |
openstack-operators |
kubelet |
heat-operator-controller-manager-69f49c598c-ngkpp |
Created |
Created container: manager | |
openstack-operators |
ovn-operator-controller-manager-d44cf6b75-gwh4x_1dfd5420-7f3d-4861-9fd7-02d64a926a3d |
90840a60.openstack.org |
LeaderElection |
ovn-operator-controller-manager-d44cf6b75-gwh4x_1dfd5420-7f3d-4861-9fd7-02d64a926a3d became leader | |
openstack-operators |
kubelet |
rabbitmq-cluster-operator-manager-668c99d594-hqlr5 |
Started |
Started container operator | |
openstack-operators |
neutron-operator-controller-manager-64ddbf8bb-5mtgr_22299959-c3ed-4d5a-984f-7d5eaa93f561 |
972c7522.openstack.org |
LeaderElection |
neutron-operator-controller-manager-64ddbf8bb-5mtgr_22299959-c3ed-4d5a-984f-7d5eaa93f561 became leader | |
openstack-operators |
octavia-operator-controller-manager-69f8888797-6sx67_c6c75463-8d8e-4156-a585-bffc785f7d74 |
98809e87.openstack.org |
LeaderElection |
octavia-operator-controller-manager-69f8888797-6sx67_c6c75463-8d8e-4156-a585-bffc785f7d74 became leader | |
openstack-operators |
cinder-operator-controller-manager-5d946d989d-6mnh8_7add9ca6-847a-4763-b02d-a578b2689e9c |
a6b6a260.openstack.org |
LeaderElection |
cinder-operator-controller-manager-5d946d989d-6mnh8_7add9ca6-847a-4763-b02d-a578b2689e9c became leader | |
openstack-operators |
kubelet |
ovn-operator-controller-manager-d44cf6b75-gwh4x |
Started |
Started container manager | |
openstack-operators |
test-operator-controller-manager-7866795846-2vx66_6fd01f8b-b986-4b04-882f-caa2e172bc85 |
6cce095b.openstack.org |
LeaderElection |
test-operator-controller-manager-7866795846-2vx66_6fd01f8b-b986-4b04-882f-caa2e172bc85 became leader | |
openstack-operators |
kubelet |
placement-operator-controller-manager-8497b45c89-dbcqg |
Started |
Started container manager | |
openstack-operators |
barbican-operator-controller-manager-868647ff47-58dhd_ffdd02b7-cb34-485b-a27d-909116b67226 |
8cc931b9.openstack.org |
LeaderElection |
barbican-operator-controller-manager-868647ff47-58dhd_ffdd02b7-cb34-485b-a27d-909116b67226 became leader | |
openstack-operators |
rabbitmq-cluster-operator-manager-668c99d594-hqlr5_728ca9ea-a997-4998-86f2-22b68e7e9c64 |
rabbitmq-cluster-operator-leader-election |
LeaderElection |
rabbitmq-cluster-operator-manager-668c99d594-hqlr5_728ca9ea-a997-4998-86f2-22b68e7e9c64 became leader | |
openstack-operators |
kubelet |
barbican-operator-controller-manager-868647ff47-58dhd |
Created |
Created container: manager | |
openstack-operators |
kubelet |
placement-operator-controller-manager-8497b45c89-dbcqg |
Created |
Created container: manager | |
openstack-operators |
swift-operator-controller-manager-68f46476f-zdksg_8f1370df-48d0-4ef7-a095-e2ef5f787e2a |
83821f12.openstack.org |
LeaderElection |
swift-operator-controller-manager-68f46476f-zdksg_8f1370df-48d0-4ef7-a095-e2ef5f787e2a became leader | |
openstack-operators |
swift-operator-controller-manager-68f46476f-zdksg_8f1370df-48d0-4ef7-a095-e2ef5f787e2a |
83821f12.openstack.org |
LeaderElection |
swift-operator-controller-manager-68f46476f-zdksg_8f1370df-48d0-4ef7-a095-e2ef5f787e2a became leader | |
openstack-operators |
kubelet |
neutron-operator-controller-manager-64ddbf8bb-5mtgr |
Started |
Started container manager | |
openstack-operators |
kubelet |
neutron-operator-controller-manager-64ddbf8bb-5mtgr |
Created |
Created container: manager | |
openstack-operators |
barbican-operator-controller-manager-868647ff47-58dhd_ffdd02b7-cb34-485b-a27d-909116b67226 |
8cc931b9.openstack.org |
LeaderElection |
barbican-operator-controller-manager-868647ff47-58dhd_ffdd02b7-cb34-485b-a27d-909116b67226 became leader | |
openstack-operators |
kubelet |
horizon-operator-controller-manager-5b9b8895d5-2wdk9 |
Started |
Started container manager | |
openstack-operators |
kubelet |
neutron-operator-controller-manager-64ddbf8bb-5mtgr |
Created |
Created container: manager | |
openstack-operators |
kubelet |
neutron-operator-controller-manager-64ddbf8bb-5mtgr |
Started |
Started container manager | |
openstack-operators |
ovn-operator-controller-manager-d44cf6b75-gwh4x_1dfd5420-7f3d-4861-9fd7-02d64a926a3d |
90840a60.openstack.org |
LeaderElection |
ovn-operator-controller-manager-d44cf6b75-gwh4x_1dfd5420-7f3d-4861-9fd7-02d64a926a3d became leader | |
openstack-operators |
neutron-operator-controller-manager-64ddbf8bb-5mtgr_22299959-c3ed-4d5a-984f-7d5eaa93f561 |
972c7522.openstack.org |
LeaderElection |
neutron-operator-controller-manager-64ddbf8bb-5mtgr_22299959-c3ed-4d5a-984f-7d5eaa93f561 became leader | |
openstack-operators |
octavia-operator-controller-manager-69f8888797-6sx67_c6c75463-8d8e-4156-a585-bffc785f7d74 |
98809e87.openstack.org |
LeaderElection |
octavia-operator-controller-manager-69f8888797-6sx67_c6c75463-8d8e-4156-a585-bffc785f7d74 became leader | |
openstack-operators |
cinder-operator-controller-manager-5d946d989d-6mnh8_7add9ca6-847a-4763-b02d-a578b2689e9c |
a6b6a260.openstack.org |
LeaderElection |
cinder-operator-controller-manager-5d946d989d-6mnh8_7add9ca6-847a-4763-b02d-a578b2689e9c became leader | |
openstack-operators |
kubelet |
horizon-operator-controller-manager-5b9b8895d5-2wdk9 |
Created |
Created container: manager | |
openstack-operators |
kubelet |
rabbitmq-cluster-operator-manager-668c99d594-hqlr5 |
Created |
Created container: operator | |
openstack-operators |
kubelet |
rabbitmq-cluster-operator-manager-668c99d594-hqlr5 |
Started |
Started container operator | |
openstack-operators |
kubelet |
telemetry-operator-controller-manager-7f45b4ff68-wk82b |
Created |
Created container: manager | |
openstack-operators |
kubelet |
telemetry-operator-controller-manager-7f45b4ff68-wk82b |
Started |
Started container manager | |
openstack-operators |
kubelet |
barbican-operator-controller-manager-868647ff47-58dhd |
Created |
Created container: manager | |
openstack-operators |
kubelet |
barbican-operator-controller-manager-868647ff47-58dhd |
Started |
Started container manager | |
openstack-operators |
heat-operator-controller-manager-69f49c598c-ngkpp_d375dcd1-3413-4083-a77e-1c035d356031 |
c3c8b535.openstack.org |
LeaderElection |
heat-operator-controller-manager-69f49c598c-ngkpp_d375dcd1-3413-4083-a77e-1c035d356031 became leader | |
openstack-operators |
heat-operator-controller-manager-69f49c598c-ngkpp_d375dcd1-3413-4083-a77e-1c035d356031 |
c3c8b535.openstack.org |
LeaderElection |
heat-operator-controller-manager-69f49c598c-ngkpp_d375dcd1-3413-4083-a77e-1c035d356031 became leader | |
openstack-operators |
glance-operator-controller-manager-77987464f4-sqmnn_6cae431b-ad6c-40d6-8773-cb6169a1f1c5 |
c569355b.openstack.org |
LeaderElection |
glance-operator-controller-manager-77987464f4-sqmnn_6cae431b-ad6c-40d6-8773-cb6169a1f1c5 became leader | |
openstack-operators |
kubelet |
octavia-operator-controller-manager-69f8888797-6sx67 |
Started |
Started container manager | |
openstack-operators |
kubelet |
octavia-operator-controller-manager-69f8888797-6sx67 |
Created |
Created container: manager | |
openstack-operators |
kubelet |
swift-operator-controller-manager-68f46476f-zdksg |
Created |
Created container: manager | |
openstack-operators |
kubelet |
heat-operator-controller-manager-69f49c598c-ngkpp |
Started |
Started container manager | |
openstack-operators |
glance-operator-controller-manager-77987464f4-sqmnn_6cae431b-ad6c-40d6-8773-cb6169a1f1c5 |
c569355b.openstack.org |
LeaderElection |
glance-operator-controller-manager-77987464f4-sqmnn_6cae431b-ad6c-40d6-8773-cb6169a1f1c5 became leader | |
openstack-operators |
kubelet |
heat-operator-controller-manager-69f49c598c-ngkpp |
Created |
Created container: manager | |
openstack-operators |
kubelet |
swift-operator-controller-manager-68f46476f-zdksg |
Started |
Started container manager | |
openstack-operators |
kubelet |
placement-operator-controller-manager-8497b45c89-dbcqg |
Started |
Started container manager | |
openstack-operators |
kubelet |
placement-operator-controller-manager-8497b45c89-dbcqg |
Created |
Created container: manager | |
openstack-operators |
kubelet |
cinder-operator-controller-manager-5d946d989d-6mnh8 |
Created |
Created container: manager | |
openstack-operators |
kubelet |
test-operator-controller-manager-7866795846-2vx66 |
Created |
Created container: manager | |
openstack-operators |
kubelet |
cinder-operator-controller-manager-5d946d989d-6mnh8 |
Started |
Started container manager | |
openstack-operators |
kubelet |
cinder-operator-controller-manager-5d946d989d-6mnh8 |
Created |
Created container: manager | |
openstack-operators |
kubelet |
cinder-operator-controller-manager-5d946d989d-6mnh8 |
Started |
Started container manager | |
openstack-operators |
kubelet |
test-operator-controller-manager-7866795846-2vx66 |
Started |
Started container manager | |
openstack-operators |
kubelet |
telemetry-operator-controller-manager-7f45b4ff68-wk82b |
Created |
Created container: manager | |
openstack-operators |
kubelet |
telemetry-operator-controller-manager-7f45b4ff68-wk82b |
Started |
Started container manager | |
openstack-operators |
rabbitmq-cluster-operator-manager-668c99d594-hqlr5_728ca9ea-a997-4998-86f2-22b68e7e9c64 |
rabbitmq-cluster-operator-leader-election |
LeaderElection |
rabbitmq-cluster-operator-manager-668c99d594-hqlr5_728ca9ea-a997-4998-86f2-22b68e7e9c64 became leader | |
openstack-operators |
test-operator-controller-manager-7866795846-2vx66_6fd01f8b-b986-4b04-882f-caa2e172bc85 |
6cce095b.openstack.org |
LeaderElection |
test-operator-controller-manager-7866795846-2vx66_6fd01f8b-b986-4b04-882f-caa2e172bc85 became leader | |
openstack-operators |
keystone-operator-controller-manager-b4d948c87-xnzn6_69f3d840-b6bf-4f7e-be76-6e304eec2227 |
6012128b.openstack.org |
LeaderElection |
keystone-operator-controller-manager-b4d948c87-xnzn6_69f3d840-b6bf-4f7e-be76-6e304eec2227 became leader | |
openstack-operators |
kubelet |
nova-operator-controller-manager-567668f5cf-2td54 |
Created |
Created container: manager | |
openstack-operators |
horizon-operator-controller-manager-5b9b8895d5-2wdk9_0327e42e-47b1-4ee7-b9d9-497f14808966 |
5ad2eba0.openstack.org |
LeaderElection |
horizon-operator-controller-manager-5b9b8895d5-2wdk9_0327e42e-47b1-4ee7-b9d9-497f14808966 became leader | |
openstack-operators |
kubelet |
nova-operator-controller-manager-567668f5cf-2td54 |
Created |
Created container: manager | |
openstack-operators |
kubelet |
nova-operator-controller-manager-567668f5cf-2td54 |
Started |
Started container manager | |
openstack-operators |
nova-operator-controller-manager-567668f5cf-2td54_57cf62f8-0091-4897-b8b1-ddacce6c6ed0 |
f33036c1.openstack.org |
LeaderElection |
nova-operator-controller-manager-567668f5cf-2td54_57cf62f8-0091-4897-b8b1-ddacce6c6ed0 became leader | |
openstack-operators |
ironic-operator-controller-manager-554564d7fc-x78p9_4adae3db-8f3c-4be2-b51b-8772b4e0e666 |
f92b5c2d.openstack.org |
LeaderElection |
ironic-operator-controller-manager-554564d7fc-x78p9_4adae3db-8f3c-4be2-b51b-8772b4e0e666 became leader | |
openstack-operators |
kubelet |
nova-operator-controller-manager-567668f5cf-2td54 |
Started |
Started container manager | |
openstack-operators |
kubelet |
glance-operator-controller-manager-77987464f4-sqmnn |
Started |
Started container manager | |
openstack-operators |
telemetry-operator-controller-manager-7f45b4ff68-wk82b_a55d2960-c999-4ef1-9931-b65f6bb34eb9 |
fa1814a2.openstack.org |
LeaderElection |
telemetry-operator-controller-manager-7f45b4ff68-wk82b_a55d2960-c999-4ef1-9931-b65f6bb34eb9 became leader | |
openstack-operators |
kubelet |
glance-operator-controller-manager-77987464f4-sqmnn |
Created |
Created container: manager | |
openstack-operators |
kubelet |
glance-operator-controller-manager-77987464f4-sqmnn |
Created |
Created container: manager | |
openstack-operators |
kubelet |
glance-operator-controller-manager-77987464f4-sqmnn |
Started |
Started container manager | |
openstack-operators |
watcher-operator-controller-manager-5db88f68c-ctk27_9b38c6f0-074a-41c0-9b30-e1babb70950d |
5049980f.openstack.org |
LeaderElection |
watcher-operator-controller-manager-5db88f68c-ctk27_9b38c6f0-074a-41c0-9b30-e1babb70950d became leader | |
openstack-operators |
kubelet |
ovn-operator-controller-manager-d44cf6b75-gwh4x |
Created |
Created container: manager | |
openstack-operators |
kubelet |
ovn-operator-controller-manager-d44cf6b75-gwh4x |
Started |
Started container manager | |
openstack-operators |
kubelet |
ovn-operator-controller-manager-d44cf6b75-gwh4x |
Created |
Created container: manager | |
openstack-operators |
kubelet |
heat-operator-controller-manager-69f49c598c-ngkpp |
Started |
Started container manager | |
openstack-operators |
kubelet |
horizon-operator-controller-manager-5b9b8895d5-2wdk9 |
Created |
Created container: manager | |
openstack-operators |
kubelet |
horizon-operator-controller-manager-5b9b8895d5-2wdk9 |
Started |
Started container manager | |
openstack-operators |
keystone-operator-controller-manager-b4d948c87-xnzn6_69f3d840-b6bf-4f7e-be76-6e304eec2227 |
6012128b.openstack.org |
LeaderElection |
keystone-operator-controller-manager-b4d948c87-xnzn6_69f3d840-b6bf-4f7e-be76-6e304eec2227 became leader | |
openstack-operators |
horizon-operator-controller-manager-5b9b8895d5-2wdk9_0327e42e-47b1-4ee7-b9d9-497f14808966 |
5ad2eba0.openstack.org |
LeaderElection |
horizon-operator-controller-manager-5b9b8895d5-2wdk9_0327e42e-47b1-4ee7-b9d9-497f14808966 became leader | |
openstack-operators |
kubelet |
test-operator-controller-manager-7866795846-2vx66 |
Created |
Created container: manager | |
openstack-operators |
kubelet |
test-operator-controller-manager-7866795846-2vx66 |
Started |
Started container manager | |
openstack-operators |
kubelet |
keystone-operator-controller-manager-b4d948c87-xnzn6 |
Started |
Started container manager | |
openstack-operators |
watcher-operator-controller-manager-5db88f68c-ctk27_9b38c6f0-074a-41c0-9b30-e1babb70950d |
5049980f.openstack.org |
LeaderElection |
watcher-operator-controller-manager-5db88f68c-ctk27_9b38c6f0-074a-41c0-9b30-e1babb70950d became leader | |
openstack-operators |
kubelet |
keystone-operator-controller-manager-b4d948c87-xnzn6 |
Created |
Created container: manager | |
openstack-operators |
kubelet |
watcher-operator-controller-manager-5db88f68c-ctk27 |
Created |
Created container: manager | |
openstack-operators |
kubelet |
watcher-operator-controller-manager-5db88f68c-ctk27 |
Started |
Started container manager | |
openstack-operators |
kubelet |
watcher-operator-controller-manager-5db88f68c-ctk27 |
Created |
Created container: manager | |
openstack-operators |
kubelet |
rabbitmq-cluster-operator-manager-668c99d594-hqlr5 |
Created |
Created container: operator | |
openstack-operators |
kubelet |
octavia-operator-controller-manager-69f8888797-6sx67 |
Created |
Created container: manager | |
openstack-operators |
kubelet |
octavia-operator-controller-manager-69f8888797-6sx67 |
Started |
Started container manager | |
openstack-operators |
kubelet |
ironic-operator-controller-manager-554564d7fc-x78p9 |
Created |
Created container: manager | |
openstack-operators |
kubelet |
ironic-operator-controller-manager-554564d7fc-x78p9 |
Started |
Started container manager | |
openstack-operators |
kubelet |
keystone-operator-controller-manager-b4d948c87-xnzn6 |
Created |
Created container: manager | |
openstack-operators |
kubelet |
keystone-operator-controller-manager-b4d948c87-xnzn6 |
Started |
Started container manager | |
openstack-operators |
nova-operator-controller-manager-567668f5cf-2td54_57cf62f8-0091-4897-b8b1-ddacce6c6ed0 |
f33036c1.openstack.org |
LeaderElection |
nova-operator-controller-manager-567668f5cf-2td54_57cf62f8-0091-4897-b8b1-ddacce6c6ed0 became leader | |
openstack-operators |
ironic-operator-controller-manager-554564d7fc-x78p9_4adae3db-8f3c-4be2-b51b-8772b4e0e666 |
f92b5c2d.openstack.org |
LeaderElection |
ironic-operator-controller-manager-554564d7fc-x78p9_4adae3db-8f3c-4be2-b51b-8772b4e0e666 became leader | |
openstack-operators |
telemetry-operator-controller-manager-7f45b4ff68-wk82b_a55d2960-c999-4ef1-9931-b65f6bb34eb9 |
fa1814a2.openstack.org |
LeaderElection |
telemetry-operator-controller-manager-7f45b4ff68-wk82b_a55d2960-c999-4ef1-9931-b65f6bb34eb9 became leader | |
openstack-operators |
kubelet |
swift-operator-controller-manager-68f46476f-zdksg |
Started |
Started container manager | |
openstack-operators |
kubelet |
swift-operator-controller-manager-68f46476f-zdksg |
Created |
Created container: manager | |
openstack-operators |
kubelet |
ironic-operator-controller-manager-554564d7fc-x78p9 |
Created |
Created container: manager | |
openstack-operators |
kubelet |
watcher-operator-controller-manager-5db88f68c-ctk27 |
Started |
Started container manager | |
openstack-operators |
kubelet |
ironic-operator-controller-manager-554564d7fc-x78p9 |
Started |
Started container manager | |
openstack-operators |
placement-operator-controller-manager-8497b45c89-dbcqg_568fd44d-a3d5-4ffa-aa86-650bad6e632d |
73d6b7ce.openstack.org |
LeaderElection |
placement-operator-controller-manager-8497b45c89-dbcqg_568fd44d-a3d5-4ffa-aa86-650bad6e632d became leader | |
openstack-operators |
placement-operator-controller-manager-8497b45c89-dbcqg_568fd44d-a3d5-4ffa-aa86-650bad6e632d |
73d6b7ce.openstack.org |
LeaderElection |
placement-operator-controller-manager-8497b45c89-dbcqg_568fd44d-a3d5-4ffa-aa86-650bad6e632d became leader | |
openstack-operators |
kubelet |
infra-operator-controller-manager-5f879c76b6-2x4ww |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/infra-operator@sha256:aef5ea3dc1d4f5b63416ee1cc12d0360a64229bb3fb954be3dd85eec8f4ae62a" in 4.007s (4.007s including waiting). Image size: 192826291 bytes. | |
openstack-operators |
infra-operator-controller-manager-5f879c76b6-2x4ww_bcb904a6-649a-4eee-99a6-cb3a2a5a06d0 |
c8c223a1.openstack.org |
LeaderElection |
infra-operator-controller-manager-5f879c76b6-2x4ww_bcb904a6-649a-4eee-99a6-cb3a2a5a06d0 became leader | |
openstack-operators |
infra-operator-controller-manager-5f879c76b6-2x4ww_bcb904a6-649a-4eee-99a6-cb3a2a5a06d0 |
c8c223a1.openstack.org |
LeaderElection |
infra-operator-controller-manager-5f879c76b6-2x4ww_bcb904a6-649a-4eee-99a6-cb3a2a5a06d0 became leader | |
openstack-operators |
kubelet |
infra-operator-controller-manager-5f879c76b6-2x4ww |
Created |
Created container: manager | |
openstack-operators |
kubelet |
infra-operator-controller-manager-5f879c76b6-2x4ww |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/infra-operator@sha256:aef5ea3dc1d4f5b63416ee1cc12d0360a64229bb3fb954be3dd85eec8f4ae62a" in 4.007s (4.007s including waiting). Image size: 192826291 bytes. | |
openstack-operators |
kubelet |
infra-operator-controller-manager-5f879c76b6-2x4ww |
Started |
Started container manager | |
openstack-operators |
kubelet |
infra-operator-controller-manager-5f879c76b6-2x4ww |
Started |
Started container manager | |
openstack-operators |
kubelet |
infra-operator-controller-manager-5f879c76b6-2x4ww |
Created |
Created container: manager | |
openstack-operators |
kubelet |
openstack-operator-controller-manager-74d597bfd6-98qgl |
Pulled |
Container image "quay.io/openstack-k8s-operators/openstack-operator@sha256:afef4af1a95a151f4e9bbb0096272d00e3e985bb25b23b4fb7f8a26ee62526a7" already present on machine | |
openstack-operators |
kubelet |
openstack-baremetal-operator-controller-manager-5f8cd6b89br8pdn |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:e6f7c2a75883f63d270378b283faeee4c4c14fbd74b509c7da82621166f07b24" | |
openstack-operators |
openstack-operator-controller-manager-74d597bfd6-98qgl_2cf26ce0-b5a3-4548-824b-ff4a46107d49 |
40ba705e.openstack.org |
LeaderElection |
openstack-operator-controller-manager-74d597bfd6-98qgl_2cf26ce0-b5a3-4548-824b-ff4a46107d49 became leader | |
openstack-operators |
kubelet |
openstack-operator-controller-manager-74d597bfd6-98qgl |
Created |
Created container: manager | |
openstack-operators |
multus |
openstack-operator-controller-manager-74d597bfd6-98qgl |
AddedInterface |
Add eth0 [10.128.0.170/23] from ovn-kubernetes | |
openstack-operators |
openstack-operator-controller-manager-74d597bfd6-98qgl_2cf26ce0-b5a3-4548-824b-ff4a46107d49 |
40ba705e.openstack.org |
LeaderElection |
openstack-operator-controller-manager-74d597bfd6-98qgl_2cf26ce0-b5a3-4548-824b-ff4a46107d49 became leader | |
openstack-operators |
multus |
openstack-baremetal-operator-controller-manager-5f8cd6b89br8pdn |
AddedInterface |
Add eth0 [10.128.0.163/23] from ovn-kubernetes | |
openstack-operators |
kubelet |
openstack-baremetal-operator-controller-manager-5f8cd6b89br8pdn |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:e6f7c2a75883f63d270378b283faeee4c4c14fbd74b509c7da82621166f07b24" | |
openstack-operators |
kubelet |
openstack-operator-controller-manager-74d597bfd6-98qgl |
Started |
Started container manager | |
openstack-operators |
kubelet |
openstack-operator-controller-manager-74d597bfd6-98qgl |
Started |
Started container manager | |
openstack-operators |
kubelet |
openstack-operator-controller-manager-74d597bfd6-98qgl |
Created |
Created container: manager | |
openstack-operators |
kubelet |
openstack-operator-controller-manager-74d597bfd6-98qgl |
Pulled |
Container image "quay.io/openstack-k8s-operators/openstack-operator@sha256:afef4af1a95a151f4e9bbb0096272d00e3e985bb25b23b4fb7f8a26ee62526a7" already present on machine | |
openstack-operators |
multus |
openstack-baremetal-operator-controller-manager-5f8cd6b89br8pdn |
AddedInterface |
Add eth0 [10.128.0.163/23] from ovn-kubernetes | |
openstack-operators |
multus |
openstack-operator-controller-manager-74d597bfd6-98qgl |
AddedInterface |
Add eth0 [10.128.0.170/23] from ovn-kubernetes | |
| | openstack-operators | openstack-baremetal-operator-controller-manager-5f8cd6b89br8pdn_cac8ddbf-39b6-45f2-8fbc-cd9246eb8965 | dedc2245.openstack.org | LeaderElection | openstack-baremetal-operator-controller-manager-5f8cd6b89br8pdn_cac8ddbf-39b6-45f2-8fbc-cd9246eb8965 became leader |
| | openstack-operators | kubelet | openstack-baremetal-operator-controller-manager-5f8cd6b89br8pdn | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:e6f7c2a75883f63d270378b283faeee4c4c14fbd74b509c7da82621166f07b24" in 2.044s (2.044s including waiting). Image size: 190527593 bytes. |
| | openstack-operators | kubelet | openstack-baremetal-operator-controller-manager-5f8cd6b89br8pdn | Created | Created container: manager |
| | openstack-operators | kubelet | openstack-baremetal-operator-controller-manager-5f8cd6b89br8pdn | Started | Started container manager |
| | openshift-operator-lifecycle-manager | job-controller | collect-profiles-29522385 | SuccessfulCreate | Created pod: collect-profiles-29522385-7rwjt |
| | openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SuccessfulCreate | Created job collect-profiles-29522385 |
| | openshift-operator-lifecycle-manager | kubelet | collect-profiles-29522385-7rwjt | Started | Started container collect-profiles |
| | openshift-operator-lifecycle-manager | kubelet | collect-profiles-29522385-7rwjt | Created | Created container: collect-profiles |
| | openshift-operator-lifecycle-manager | multus | collect-profiles-29522385-7rwjt | AddedInterface | Add eth0 [10.128.0.172/23] from ovn-kubernetes |
| | openshift-operator-lifecycle-manager | kubelet | collect-profiles-29522385-7rwjt | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" already present on machine |
| | openshift-operator-lifecycle-manager | job-controller | collect-profiles-29522385 | Completed | Job completed |
| (x2) | openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SawCompletedJob | Saw completed job: collect-profiles-29522385, condition: Complete |
| | openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SuccessfulDelete | Deleted job collect-profiles-29522340 |
| | openstack | cert-manager-certificates-trigger | rootca-public | Issuing | Issuing certificate as Secret does not exist |
| (x2) | openstack | cert-manager-issuers | rootca-public | ErrGetKeyPair | Error getting keypair for CA issuer: secrets "rootca-public" not found |
| (x2) | openstack | cert-manager-issuers | rootca-public | ErrInitIssuer | Error initializing issuer: secrets "rootca-public" not found |
| (x2) | openstack | cert-manager-issuers | rootca-internal | ErrInitIssuer | Error initializing issuer: secrets "rootca-internal" not found |
| | openstack | cert-manager-certificaterequests-issuer-vault | rootca-internal-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-key-manager | rootca-internal | Generated | Stored new private key in temporary Secret resource "rootca-internal-9pvvm" |
| | openstack | cert-manager-certificates-request-manager | rootca-internal | Requested | Created new CertificateRequest resource "rootca-internal-1" |
| (x2) | openstack | cert-manager-issuers | rootca-internal | ErrGetKeyPair | Error getting keypair for CA issuer: secrets "rootca-internal" not found |
| | openstack | cert-manager-certificates-issuing | rootca-public | Issuing | The certificate has been successfully issued |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | rootca-internal-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificaterequests-approver | rootca-internal-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificaterequests-issuer-venafi | rootca-internal-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-request-manager | rootca-public | Requested | Created new CertificateRequest resource "rootca-public-1" |
| | openstack | cert-manager-certificates-key-manager | rootca-public | Generated | Stored new private key in temporary Secret resource "rootca-public-rgh52" |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | rootca-public-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificaterequests-approver | rootca-public-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificaterequests-issuer-venafi | rootca-public-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | rootca-public-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-vault | rootca-public-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-ca | rootca-internal-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-ca | rootca-public-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | rootca-internal-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-trigger | rootca-internal | Issuing | Issuing certificate as Secret does not exist |
| | openstack | cert-manager-certificaterequests-issuer-acme | rootca-internal-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-acme | rootca-public-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-key-manager | rootca-libvirt | Generated | Stored new private key in temporary Secret resource "rootca-libvirt-hggkb" |
| | openstack | cert-manager-certificaterequests-issuer-venafi | rootca-libvirt-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | rootca-libvirt-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-acme | rootca-libvirt-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-ca | rootca-libvirt-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-vault | rootca-libvirt-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-issuing | rootca-internal | Issuing | The certificate has been successfully issued |
| | openstack | cert-manager-certificaterequests-approver | rootca-libvirt-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | rootca-libvirt-1 | CertificateIssued | Certificate fetched from issuer successfully |
| (x2) | openstack | cert-manager-issuers | rootca-libvirt | ErrGetKeyPair | Error getting keypair for CA issuer: secrets "rootca-libvirt" not found |
| (x2) | openstack | cert-manager-issuers | rootca-libvirt | ErrInitIssuer | Error initializing issuer: secrets "rootca-libvirt" not found |
| | openstack | cert-manager-certificates-trigger | rootca-libvirt | Issuing | Issuing certificate as Secret does not exist |
| | openstack | cert-manager-certificates-request-manager | rootca-libvirt | Requested | Created new CertificateRequest resource "rootca-libvirt-1" |
| (x2) | openstack | cert-manager-issuers | rootca-ovn | ErrGetKeyPair | Error getting keypair for CA issuer: secrets "rootca-ovn" not found |
| | openstack | cert-manager-certificates-issuing | rootca-libvirt | Issuing | The certificate has been successfully issued |
| (x2) | openstack | cert-manager-issuers | rootca-ovn | ErrInitIssuer | Error initializing issuer: secrets "rootca-ovn" not found |
| | openstack | cert-manager-certificates-trigger | rootca-ovn | Issuing | Issuing certificate as Secret does not exist |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | rootca-ovn-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-key-manager | rootca-ovn | Generated | Stored new private key in temporary Secret resource "rootca-ovn-6g9rd" |
| (x3) | openstack | cert-manager-issuers | rootca-public | KeyPairVerified | Signing CA verified |
| | openstack | cert-manager-certificaterequests-issuer-acme | rootca-ovn-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-issuing | rootca-ovn | Issuing | The certificate has been successfully issued |
| | openstack | cert-manager-certificaterequests-issuer-ca | rootca-ovn-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-vault | rootca-ovn-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-venafi | rootca-ovn-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-request-manager | rootca-ovn | Requested | Created new CertificateRequest resource "rootca-ovn-1" |
| | openstack | cert-manager-certificaterequests-approver | rootca-ovn-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificates-trigger | rabbitmq-svc | Issuing | Issuing certificate as Secret does not exist |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | rootca-ovn-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificates-trigger | rabbitmq-cell1-svc | Issuing | Issuing certificate as Secret does not exist |
| | openstack | replicaset-controller | dnsmasq-dns-5c7b6fb887 | SuccessfulCreate | Created pod: dnsmasq-dns-5c7b6fb887-tpv9d |
| | openstack | cert-manager-certificates-key-manager | rabbitmq-cell1-svc | Generated | Stored new private key in temporary Secret resource "rabbitmq-cell1-svc-qhcp8" |
| | openstack | cert-manager-certificates-request-manager | rabbitmq-cell1-svc | Requested | Created new CertificateRequest resource "rabbitmq-cell1-svc-1" |
| | openstack | deployment-controller | dnsmasq-dns | ScalingReplicaSet | Scaled up replica set dnsmasq-dns-5c7b6fb887 to 1 |
| | openstack | cert-manager-certificaterequests-issuer-ca | rabbitmq-cell1-svc-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | metallb-controller | dnsmasq-dns | IPAllocated | Assigned IP ["192.168.122.80"] |
| (x2) | openstack | metallb-controller | dnsmasq-dns | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/address-pool |
| | openstack | cert-manager-certificaterequests-issuer-vault | rabbitmq-cell1-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-ca | rabbitmq-cell1-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-approver | rabbitmq-cell1-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificaterequests-issuer-venafi | rabbitmq-cell1-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-acme | rabbitmq-cell1-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| (x2) | openstack | metallb-controller | dnsmasq-dns | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/allow-shared-ip |
| | openstack | deployment-controller | dnsmasq-dns | ScalingReplicaSet | Scaled up replica set dnsmasq-dns-7d78499c to 1 |
| | openstack | cert-manager-certificates-key-manager | rabbitmq-svc | Generated | Stored new private key in temporary Secret resource "rabbitmq-svc-2ssms" |
| (x3) | openstack | cert-manager-issuers | rootca-internal | KeyPairVerified | Signing CA verified |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | rabbitmq-cell1-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | replicaset-controller | dnsmasq-dns-7d78499c | SuccessfulCreate | Created pod: dnsmasq-dns-7d78499c-p9rp4 |
| (x2) | openstack | metallb-controller | dnsmasq-dns | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs |
| | openstack | cert-manager-certificaterequests-issuer-vault | rabbitmq-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | statefulset-controller | rabbitmq-cell1-server | SuccessfulCreate | create Claim persistence-rabbitmq-cell1-server-0 Pod rabbitmq-cell1-server-0 in StatefulSet rabbitmq-cell1-server success |
| | openstack | rabbitmqcluster-controller | rabbitmq-cell1 | SuccessfulCreate | (combined from similar events): created resource rabbitmq-cell1-server of Type *v1.StatefulSet |
| | openstack | replicaset-controller | dnsmasq-dns-75b66f9649 | SuccessfulCreate | Created pod: dnsmasq-dns-75b66f9649-znfnp |
| | openstack | multus | dnsmasq-dns-7d78499c-p9rp4 | AddedInterface | Add eth0 [10.128.0.174/23] from ovn-kubernetes |
| | openstack | cert-manager-certificates-request-manager | rabbitmq-svc | Requested | Created new CertificateRequest resource "rabbitmq-svc-1" |
| | openstack | kubelet | dnsmasq-dns-7d78499c-p9rp4 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" |
| | openstack | deployment-controller | dnsmasq-dns | ScalingReplicaSet | Scaled up replica set dnsmasq-dns-75b66f9649 to 1 from 0 |
| | openstack | rabbitmqcluster-controller | rabbitmq-cell1 | SuccessfulCreate | created resource rabbitmq-cell1-server of Type *v1.RoleBinding |
| | openstack | cert-manager-certificaterequests-issuer-acme | rabbitmq-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-venafi | rabbitmq-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-issuing | rabbitmq-cell1-svc | Issuing | The certificate has been successfully issued |
| | openstack | rabbitmqcluster-controller | rabbitmq-cell1 | SuccessfulCreate | created resource rabbitmq-cell1-peer-discovery of Type *v1.Role |
| | openstack | deployment-controller | dnsmasq-dns | ScalingReplicaSet | Scaled down replica set dnsmasq-dns-5c7b6fb887 to 0 from 1 |
| | openstack | rabbitmqcluster-controller | rabbitmq-cell1 | SuccessfulCreate | created resource rabbitmq-cell1-nodes of Type *v1.Service |
| | openstack | rabbitmqcluster-controller | rabbitmq-cell1 | SuccessfulCreate | created resource rabbitmq-cell1-server of Type *v1.ServiceAccount |
| | openstack | rabbitmqcluster-controller | rabbitmq-cell1 | SuccessfulCreate | created resource rabbitmq-cell1-server-conf of Type *v1.ConfigMap |
| | openstack | cert-manager-certificaterequests-issuer-ca | rabbitmq-svc-1 | CertificateIssued | Certificate fetched from issuer successfully |
| (x3) | openstack | cert-manager-issuers | rootca-libvirt | KeyPairVerified | Signing CA verified |
| | openstack | rabbitmqcluster-controller | rabbitmq-cell1 | SuccessfulCreate | created resource rabbitmq-cell1 of Type *v1.Service |
| | openstack | rabbitmqcluster-controller | rabbitmq-cell1 | SuccessfulCreate | created resource rabbitmq-cell1-plugins-conf of Type *v1.ConfigMap |
| | openstack | topolvm.io_lvms-operator-59b4cb8ccf-q5dk5_41ba5f2e-d293-4c72-bf87-f4da6e126ac2 | persistence-rabbitmq-cell1-server-0 | Provisioning | External provisioner is provisioning volume for claim "openstack/persistence-rabbitmq-cell1-server-0" |
| | openstack | metallb-controller | rabbitmq-cell1 | IPAllocated | Assigned IP ["172.17.0.86"] |
| | openstack | cert-manager-certificaterequests-issuer-ca | rabbitmq-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | rabbitmqcluster-controller | rabbitmq-cell1 | SuccessfulCreate | created resource rabbitmq-cell1-default-user of Type *v1.Secret |
| | openstack | cert-manager-certificaterequests-approver | rabbitmq-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| (x2) | openstack | persistentvolume-controller | persistence-rabbitmq-cell1-server-0 | ExternalProvisioning | Waiting for a volume to be created either by the external provisioner 'topolvm.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered. |
| | openstack | persistentvolume-controller | persistence-rabbitmq-cell1-server-0 | WaitForFirstConsumer | waiting for first consumer to be created before binding |
| (x2) | openstack | metallb-controller | rabbitmq-cell1 | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/address-pool |
| | openstack | multus | dnsmasq-dns-5c7b6fb887-tpv9d | AddedInterface | Add eth0 [10.128.0.173/23] from ovn-kubernetes |
| | openstack | kubelet | dnsmasq-dns-5c7b6fb887-tpv9d | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" |
| | openstack | statefulset-controller | rabbitmq-cell1-server | SuccessfulCreate | create Pod rabbitmq-cell1-server-0 in StatefulSet rabbitmq-cell1-server successful |
| | openstack | rabbitmqcluster-controller | rabbitmq-cell1 | SuccessfulCreate | created resource rabbitmq-cell1-erlang-cookie of Type *v1.Secret |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | rabbitmq-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | replicaset-controller | dnsmasq-dns-5c7b6fb887 | SuccessfulDelete | Deleted pod: dnsmasq-dns-5c7b6fb887-tpv9d |
| (x2) | openstack | metallb-controller | rabbitmq-cell1 | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs |
| | openstack | cert-manager-certificates-key-manager | galera-openstack-svc | Generated | Stored new private key in temporary Secret resource "galera-openstack-svc-jzbmj" |
| | openstack | replicaset-controller | dnsmasq-dns-6b98d7b55c | SuccessfulCreate | Created pod: dnsmasq-dns-6b98d7b55c-hdh27 |
| | openstack | rabbitmqcluster-controller | rabbitmq | SuccessfulCreate | created resource rabbitmq-server of Type *v1.ServiceAccount |
| (x3) | openstack | cert-manager-issuers | rootca-ovn | KeyPairVerified | Signing CA verified |
| | openstack | rabbitmqcluster-controller | rabbitmq | SuccessfulCreate | created resource rabbitmq-server-conf of Type *v1.ConfigMap |
| | openstack | statefulset-controller | rabbitmq-server | SuccessfulCreate | create Pod rabbitmq-server-0 in StatefulSet rabbitmq-server successful |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | galera-openstack-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-venafi | galera-openstack-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | rabbitmqcluster-controller | rabbitmq | SuccessfulCreate | created resource rabbitmq-plugins-conf of Type *v1.ConfigMap |
| | openstack | cert-manager-certificaterequests-issuer-acme | galera-openstack-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | rabbitmqcluster-controller | rabbitmq | SuccessfulCreate | (combined from similar events): created resource rabbitmq-server of Type *v1.StatefulSet |
| | openstack | cert-manager-certificaterequests-issuer-ca | galera-openstack-svc-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificates-trigger | galera-openstack-svc | Issuing | Issuing certificate as Secret does not exist |
| | openstack | statefulset-controller | rabbitmq-server | SuccessfulCreate | create Claim persistence-rabbitmq-server-0 Pod rabbitmq-server-0 in StatefulSet rabbitmq-server success |
| | openstack | multus | dnsmasq-dns-75b66f9649-znfnp | AddedInterface | Add eth0 [10.128.0.175/23] from ovn-kubernetes |
| | openstack | cert-manager-certificaterequests-approver | galera-openstack-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | deployment-controller | dnsmasq-dns | ScalingReplicaSet | Scaled down replica set dnsmasq-dns-7d78499c to 0 from 1 |
| | openstack | persistentvolume-controller | persistence-rabbitmq-server-0 | WaitForFirstConsumer | waiting for first consumer to be created before binding |
| (x2) | openstack | persistentvolume-controller | persistence-rabbitmq-server-0 | ExternalProvisioning | Waiting for a volume to be created either by the external provisioner 'topolvm.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered. |
| | openstack | rabbitmqcluster-controller | rabbitmq | SuccessfulCreate | created resource rabbitmq-peer-discovery of Type *v1.Role |
| | openstack | replicaset-controller | dnsmasq-dns-7d78499c | SuccessfulDelete | Deleted pod: dnsmasq-dns-7d78499c-p9rp4 |
| | openstack | cert-manager-certificates-issuing | rabbitmq-svc | Issuing | The certificate has been successfully issued |
| | openstack | cert-manager-certificates-request-manager | galera-openstack-svc | Requested | Created new CertificateRequest resource "galera-openstack-svc-1" |
| | openstack | deployment-controller | dnsmasq-dns | ScalingReplicaSet | Scaled up replica set dnsmasq-dns-6b98d7b55c to 1 from 0 |
| | openstack | kubelet | dnsmasq-dns-75b66f9649-znfnp | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" |
| | openstack | rabbitmqcluster-controller | rabbitmq | SuccessfulCreate | created resource rabbitmq-default-user of Type *v1.Secret |
| | openstack | rabbitmqcluster-controller | rabbitmq | SuccessfulCreate | created resource rabbitmq-erlang-cookie of Type *v1.Secret |
| | openstack | cert-manager-certificaterequests-issuer-ca | galera-openstack-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-vault | galera-openstack-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | rabbitmqcluster-controller | rabbitmq | SuccessfulCreate | created resource rabbitmq-nodes of Type *v1.Service |
| | openstack | rabbitmqcluster-controller | rabbitmq | SuccessfulCreate | created resource rabbitmq of Type *v1.Service |
| | openstack | rabbitmqcluster-controller | rabbitmq | SuccessfulCreate | created resource rabbitmq-server of Type *v1.RoleBinding |
| (x2) | openstack | metallb-controller | rabbitmq | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs |
| (x2) | openstack | metallb-controller | rabbitmq | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/address-pool |
| | openstack | metallb-controller | rabbitmq | IPAllocated | Assigned IP ["172.17.0.85"] |
| | openstack | cert-manager-certificates-issuing | galera-openstack-svc | Issuing | The certificate has been successfully issued |
| | openstack | statefulset-controller | openstack-galera | SuccessfulCreate | create Pod openstack-galera-0 in StatefulSet openstack-galera successful |
| | openstack | multus | dnsmasq-dns-6b98d7b55c-hdh27 | AddedInterface | Add eth0 [10.128.0.176/23] from ovn-kubernetes |
| | openstack | statefulset-controller | openstack-galera | SuccessfulCreate | create Claim mysql-db-openstack-galera-0 Pod openstack-galera-0 in StatefulSet openstack-galera success |
| | openstack | kubelet | dnsmasq-dns-6b98d7b55c-hdh27 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" |
| | openstack | persistentvolume-controller | mysql-db-openstack-galera-0 | WaitForFirstConsumer | waiting for first consumer to be created before binding |
| (x2) | openstack | persistentvolume-controller | mysql-db-openstack-galera-0 | ExternalProvisioning | Waiting for a volume to be created either by the external provisioner 'topolvm.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered. |
| | openstack | cert-manager-certificates-trigger | galera-openstack-cell1-svc | Issuing | Issuing certificate as Secret does not exist |
| | openstack | cert-manager-certificaterequests-issuer-ca | galera-openstack-cell1-svc-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | topolvm.io_lvms-operator-59b4cb8ccf-q5dk5_41ba5f2e-d293-4c72-bf87-f4da6e126ac2 | persistence-rabbitmq-server-0 | Provisioning | External provisioner is provisioning volume for claim "openstack/persistence-rabbitmq-server-0" |
| | openstack | cert-manager-certificaterequests-issuer-acme | galera-openstack-cell1-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-approver | galera-openstack-cell1-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | galera-openstack-cell1-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | topolvm.io_lvms-operator-59b4cb8ccf-q5dk5_41ba5f2e-d293-4c72-bf87-f4da6e126ac2 | persistence-rabbitmq-cell1-server-0 | ProvisioningSucceeded | Successfully provisioned volume pvc-e758de4e-c517-4fee-b541-38ade33945a2 |
| | openstack | cert-manager-certificates-request-manager | galera-openstack-cell1-svc | Requested | Created new CertificateRequest resource "galera-openstack-cell1-svc-1" |
| | openstack | cert-manager-certificaterequests-issuer-venafi | galera-openstack-cell1-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-vault | galera-openstack-cell1-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-issuing | galera-openstack-cell1-svc | Issuing | The certificate has been successfully issued |
| | openstack | cert-manager-certificaterequests-issuer-ca | galera-openstack-cell1-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-key-manager | galera-openstack-cell1-svc | Generated | Stored new private key in temporary Secret resource "galera-openstack-cell1-svc-5lrnp" |
| | openstack | statefulset-controller | openstack-cell1-galera | SuccessfulCreate | create Pod openstack-cell1-galera-0 in StatefulSet openstack-cell1-galera successful |
| | openstack | cert-manager-certificates-trigger | memcached-svc | Issuing | Issuing certificate as Secret does not exist |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | memcached-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-venafi | memcached-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-ca | memcached-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-vault | memcached-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-acme | memcached-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | topolvm.io_lvms-operator-59b4cb8ccf-q5dk5_41ba5f2e-d293-4c72-bf87-f4da6e126ac2 | mysql-db-openstack-galera-0 | Provisioning | External provisioner is provisioning volume for claim "openstack/mysql-db-openstack-galera-0" |
| | openstack | cert-manager-certificaterequests-approver | memcached-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificaterequests-issuer-ca | memcached-svc-1 | CertificateIssued | Certificate fetched from issuer successfully |
| (x2) | openstack | persistentvolume-controller | mysql-db-openstack-cell1-galera-0 | ExternalProvisioning | Waiting for a volume to be created either by the external provisioner 'topolvm.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered. |
| | openstack | persistentvolume-controller | mysql-db-openstack-cell1-galera-0 | WaitForFirstConsumer | waiting for first consumer to be created before binding |
| | openstack | statefulset-controller | openstack-cell1-galera | SuccessfulCreate | create Claim mysql-db-openstack-cell1-galera-0 Pod openstack-cell1-galera-0 in StatefulSet openstack-cell1-galera success |
| | openstack | topolvm.io_lvms-operator-59b4cb8ccf-q5dk5_41ba5f2e-d293-4c72-bf87-f4da6e126ac2 | persistence-rabbitmq-server-0 | ProvisioningSucceeded | Successfully provisioned volume pvc-47ff1353-8a7c-4230-885c-ac774bd86eb6 |
| | openstack | cert-manager-certificates-key-manager | memcached-svc | Generated | Stored new private key in temporary Secret resource "memcached-svc-chbjh" |
| | openstack | cert-manager-certificates-request-manager | memcached-svc | Requested | Created new CertificateRequest resource "memcached-svc-1" |
| | openstack | cert-manager-certificates-request-manager | ovn-metrics | Requested | Created new CertificateRequest resource "ovn-metrics-1" |
| | openstack | cert-manager-certificaterequests-issuer-venafi | ovn-metrics-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | topolvm.io_lvms-operator-59b4cb8ccf-q5dk5_41ba5f2e-d293-4c72-bf87-f4da6e126ac2 | mysql-db-openstack-galera-0 | ProvisioningSucceeded | Successfully provisioned volume pvc-94a0bc6e-ff15-42b7-ae6a-11223236c92d |
| | openstack | cert-manager-certificates-issuing | memcached-svc | Issuing | The certificate has been successfully issued |
| | openstack | cert-manager-certificates-key-manager | ovn-metrics | Generated | Stored new private key in temporary Secret resource "ovn-metrics-srlgs" |
| | openstack | cert-manager-certificaterequests-issuer-vault | ovn-metrics-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-trigger | ovn-metrics | Issuing | Issuing certificate as Secret does not exist |
| | openstack | statefulset-controller | memcached | SuccessfulCreate | create Pod memcached-0 in StatefulSet memcached successful |
| | openstack | topolvm.io_lvms-operator-59b4cb8ccf-q5dk5_41ba5f2e-d293-4c72-bf87-f4da6e126ac2 | mysql-db-openstack-cell1-galera-0 | Provisioning | External provisioner is provisioning volume for claim "openstack/mysql-db-openstack-cell1-galera-0" |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | ovn-metrics-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-ca | ovn-metrics-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-acme | ovn-metrics-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-issuing | ovn-metrics | Issuing | The certificate has been successfully issued |
| | openstack | cert-manager-certificates-trigger | ovndbcluster-nb-ovndbs | Issuing | Issuing certificate as Secret does not exist |
| | openstack | cert-manager-certificaterequests-issuer-ca | ovn-metrics-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificaterequests-approver | ovn-metrics-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | topolvm.io_lvms-operator-59b4cb8ccf-q5dk5_41ba5f2e-d293-4c72-bf87-f4da6e126ac2 | mysql-db-openstack-cell1-galera-0 | ProvisioningSucceeded | Successfully provisioned volume pvc-de643740-318b-440f-840a-7220194fa0e3 |
| | openstack | cert-manager-certificates-key-manager | ovncontroller-ovndbs | Generated | Stored new private key in temporary Secret resource "ovncontroller-ovndbs-qjqml" |
openstack |
cert-manager-certificates-key-manager |
ovndbcluster-nb-ovndbs |
Generated |
Stored new private key in temporary Secret resource "ovndbcluster-nb-ovndbs-t6d7l" | |
openstack |
cert-manager-certificates-trigger |
ovnnorthd-ovndbs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack |
cert-manager-certificates-trigger |
neutron-ovndbs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack |
cert-manager-certificates-key-manager |
ovnnorthd-ovndbs |
Generated |
Stored new private key in temporary Secret resource "ovnnorthd-ovndbs-fm9wm" | |
openstack |
cert-manager-certificates-trigger |
ovncontroller-ovndbs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack |
cert-manager-certificaterequests-issuer-vault |
ovncontroller-ovndbs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-ca |
ovndbcluster-nb-ovndbs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificates-request-manager |
ovncontroller-ovndbs |
Requested |
Created new CertificateRequest resource "ovncontroller-ovndbs-1" | |
openstack |
cert-manager-certificates-request-manager |
ovnnorthd-ovndbs |
Requested |
Created new CertificateRequest resource "ovnnorthd-ovndbs-1" | |
openstack |
cert-manager-certificaterequests-issuer-ca |
ovnnorthd-ovndbs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-selfsigned |
ovnnorthd-ovndbs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-vault |
ovnnorthd-ovndbs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-selfsigned |
ovncontroller-ovndbs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-acme |
ovnnorthd-ovndbs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-ca |
ovncontroller-ovndbs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-acme |
ovncontroller-ovndbs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-venafi |
ovnnorthd-ovndbs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificates-request-manager |
ovndbcluster-nb-ovndbs |
Requested |
Created new CertificateRequest resource "ovndbcluster-nb-ovndbs-1" | |
openstack |
cert-manager-certificaterequests-issuer-ca |
ovndbcluster-nb-ovndbs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack |
cert-manager-certificaterequests-approver |
ovndbcluster-nb-ovndbs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack |
cert-manager-certificaterequests-issuer-vault |
ovndbcluster-nb-ovndbs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-venafi |
ovndbcluster-nb-ovndbs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-acme |
ovndbcluster-nb-ovndbs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-selfsigned |
ovndbcluster-nb-ovndbs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificates-key-manager |
neutron-ovndbs |
Generated |
Stored new private key in temporary Secret resource "neutron-ovndbs-bxp9x" | |
openstack |
cert-manager-certificaterequests-issuer-venafi |
ovncontroller-ovndbs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificates-request-manager |
neutron-ovndbs |
Requested |
Created new CertificateRequest resource "neutron-ovndbs-1" | |
openstack |
cert-manager-certificaterequests-issuer-vault |
neutron-ovndbs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-approver |
ovnnorthd-ovndbs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack |
cert-manager-certificaterequests-issuer-ca |
ovncontroller-ovndbs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack |
cert-manager-certificaterequests-approver |
ovncontroller-ovndbs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack |
cert-manager-certificaterequests-issuer-acme |
neutron-ovndbs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-ca |
ovnnorthd-ovndbs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack |
cert-manager-certificaterequests-issuer-ca |
neutron-ovndbs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-venafi |
neutron-ovndbs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-selfsigned |
neutron-ovndbs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
daemonset-controller |
ovn-controller-ovs |
SuccessfulCreate |
Created pod: ovn-controller-ovs-fxgqd | |
openstack |
cert-manager-certificaterequests-approver |
neutron-ovndbs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack |
cert-manager-certificates-issuing |
ovndbcluster-nb-ovndbs |
Issuing |
The certificate has been successfully issued | |
openstack |
persistentvolume-controller |
ovndbcluster-nb-etc-ovn-ovsdbserver-nb-0 |
WaitForFirstConsumer |
waiting for first consumer to be created before binding | |
openstack |
cert-manager-certificates-issuing |
ovncontroller-ovndbs |
Issuing |
The certificate has been successfully issued | |
openstack |
statefulset-controller |
ovsdbserver-nb |
SuccessfulCreate |
create Claim ovndbcluster-nb-etc-ovn-ovsdbserver-nb-0 Pod ovsdbserver-nb-0 in StatefulSet ovsdbserver-nb success | |
openstack |
statefulset-controller |
ovsdbserver-nb |
SuccessfulCreate |
create Pod ovsdbserver-nb-0 in StatefulSet ovsdbserver-nb successful | |
| (x2) | openstack |
persistentvolume-controller |
ovndbcluster-nb-etc-ovn-ovsdbserver-nb-0 |
ExternalProvisioning |
Waiting for a volume to be created either by the external provisioner 'topolvm.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered. |
openstack |
topolvm.io_lvms-operator-59b4cb8ccf-q5dk5_41ba5f2e-d293-4c72-bf87-f4da6e126ac2 |
ovndbcluster-nb-etc-ovn-ovsdbserver-nb-0 |
Provisioning |
External provisioner is provisioning volume for claim "openstack/ovndbcluster-nb-etc-ovn-ovsdbserver-nb-0" | |
openstack |
cert-manager-certificaterequests-issuer-ca |
neutron-ovndbs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack |
cert-manager-certificates-trigger |
ovndbcluster-sb-ovndbs |
Issuing |
Issuing certificate as Secret does not exist | |
openstack |
daemonset-controller |
ovn-controller |
SuccessfulCreate |
Created pod: ovn-controller-hdbmn | |
openstack |
cert-manager-certificates-key-manager |
ovndbcluster-sb-ovndbs |
Generated |
Stored new private key in temporary Secret resource "ovndbcluster-sb-ovndbs-ktvkx" | |
openstack |
cert-manager-certificates-issuing |
ovnnorthd-ovndbs |
Issuing |
The certificate has been successfully issued | |
openstack |
topolvm.io_lvms-operator-59b4cb8ccf-q5dk5_41ba5f2e-d293-4c72-bf87-f4da6e126ac2 |
ovndbcluster-nb-etc-ovn-ovsdbserver-nb-0 |
ProvisioningSucceeded |
Successfully provisioned volume pvc-6f89f539-a4f5-4f3d-b3f7-a3e8da3a6bf8 | |
openstack |
cert-manager-certificates-request-manager |
ovndbcluster-sb-ovndbs |
Requested |
Created new CertificateRequest resource "ovndbcluster-sb-ovndbs-1" | |
openstack |
cert-manager-certificaterequests-issuer-acme |
ovndbcluster-sb-ovndbs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-selfsigned |
ovndbcluster-sb-ovndbs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-ca |
ovndbcluster-sb-ovndbs-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack |
cert-manager-certificaterequests-issuer-vault |
ovndbcluster-sb-ovndbs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificates-issuing |
neutron-ovndbs |
Issuing |
The certificate has been successfully issued | |
openstack |
cert-manager-certificaterequests-issuer-ca |
ovndbcluster-sb-ovndbs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-approver |
ovndbcluster-sb-ovndbs-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack |
cert-manager-certificaterequests-issuer-venafi |
ovndbcluster-sb-ovndbs-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificates-issuing |
ovndbcluster-sb-ovndbs |
Issuing |
The certificate has been successfully issued | |
openstack |
topolvm.io_lvms-operator-59b4cb8ccf-q5dk5_41ba5f2e-d293-4c72-bf87-f4da6e126ac2 |
ovndbcluster-sb-etc-ovn-ovsdbserver-sb-0 |
Provisioning |
External provisioner is provisioning volume for claim "openstack/ovndbcluster-sb-etc-ovn-ovsdbserver-sb-0" | |
| (x3) | openstack |
persistentvolume-controller |
ovndbcluster-sb-etc-ovn-ovsdbserver-sb-0 |
ExternalProvisioning |
Waiting for a volume to be created either by the external provisioner 'topolvm.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered. |
openstack |
persistentvolume-controller |
ovndbcluster-sb-etc-ovn-ovsdbserver-sb-0 |
WaitForFirstConsumer |
waiting for first consumer to be created before binding | |
openstack |
statefulset-controller |
ovsdbserver-sb |
SuccessfulCreate |
create Claim ovndbcluster-sb-etc-ovn-ovsdbserver-sb-0 Pod ovsdbserver-sb-0 in StatefulSet ovsdbserver-sb success | |
openstack |
statefulset-controller |
ovsdbserver-sb |
SuccessfulCreate |
create Pod ovsdbserver-sb-0 in StatefulSet ovsdbserver-sb successful | |
openstack |
topolvm.io_lvms-operator-59b4cb8ccf-q5dk5_41ba5f2e-d293-4c72-bf87-f4da6e126ac2 |
ovndbcluster-sb-etc-ovn-ovsdbserver-sb-0 |
ProvisioningSucceeded |
Successfully provisioned volume pvc-48760907-599c-4e44-af12-39c3c5bafb5d | |
openstack |
multus |
openstack-cell1-galera-0 |
AddedInterface |
Add eth0 [10.128.0.181/23] from ovn-kubernetes | |
openstack |
multus |
ovn-controller-ovs-fxgqd |
AddedInterface |
Add datacentre [] from openstack/datacentre | |
openstack |
multus |
ovn-controller-ovs-fxgqd |
AddedInterface |
Add eth0 [10.128.0.183/23] from ovn-kubernetes | |
openstack |
multus |
memcached-0 |
AddedInterface |
Add eth0 [10.128.0.178/23] from ovn-kubernetes | |
openstack |
multus |
ovsdbserver-nb-0 |
AddedInterface |
Add eth0 [10.128.0.184/23] from ovn-kubernetes | |
openstack |
multus |
openstack-galera-0 |
AddedInterface |
Add eth0 [10.128.0.180/23] from ovn-kubernetes | |
openstack |
multus |
rabbitmq-server-0 |
AddedInterface |
Add eth0 [10.128.0.179/23] from ovn-kubernetes | |
openstack |
multus |
ovn-controller-hdbmn |
AddedInterface |
Add eth0 [10.128.0.182/23] from ovn-kubernetes | |
openstack |
multus |
rabbitmq-cell1-server-0 |
AddedInterface |
Add eth0 [10.128.0.177/23] from ovn-kubernetes | |
openstack |
kubelet |
memcached-0 |
Pulling |
Pulling image "quay.io/podified-antelope-centos9/openstack-memcached@sha256:3c3b6a71bc3205fc3cf7616172526846dac02edd188be775b358a604448e5a66" | |
openstack |
kubelet |
openstack-galera-0 |
Pulling |
Pulling image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" | |
openstack |
kubelet |
openstack-cell1-galera-0 |
Pulling |
Pulling image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" | |
openstack |
kubelet |
ovn-controller-hdbmn |
Pulling |
Pulling image "quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e" | |
openstack |
kubelet |
rabbitmq-server-0 |
Pulling |
Pulling image "quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:221c84e162c46ac7454de6fb84343d0a605f2ea1d7d5647a34a66569e0a8fd76" | |
openstack |
kubelet |
rabbitmq-cell1-server-0 |
Pulling |
Pulling image "quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:221c84e162c46ac7454de6fb84343d0a605f2ea1d7d5647a34a66569e0a8fd76" | |
openstack |
kubelet |
dnsmasq-dns-5c7b6fb887-tpv9d |
Created |
Created container: init | |
openstack |
kubelet |
dnsmasq-dns-7d78499c-p9rp4 |
Pulled |
Successfully pulled image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" in 30.421s (30.421s including waiting). Image size: 678733141 bytes. | |
openstack |
multus |
ovn-controller-ovs-fxgqd |
AddedInterface |
Add tenant [172.19.0.30/24] from openstack/tenant | |
openstack |
kubelet |
ovn-controller-ovs-fxgqd |
Pulling |
Pulling image "quay.io/podified-antelope-centos9/openstack-ovn-base@sha256:ec79aa2b5613713adc6a686e0efa1aba5bef9b522f9993ca02f39194cb5d3c00" | |
openstack |
kubelet |
dnsmasq-dns-7d78499c-p9rp4 |
Started |
Started container init | |
openstack |
kubelet |
dnsmasq-dns-6b98d7b55c-hdh27 |
Pulled |
Successfully pulled image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" in 28.172s (28.172s including waiting). Image size: 678733141 bytes. | |
openstack |
kubelet |
ovsdbserver-nb-0 |
Pulling |
Pulling image "quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server@sha256:0cea296f038e0b72578239b07ed01bf75ff2c4be033c60cfc793270a2dae1d8a" | |
openstack |
multus |
ovsdbserver-nb-0 |
AddedInterface |
Add internalapi [172.17.0.30/24] from openstack/internalapi | |
openstack |
kubelet |
dnsmasq-dns-5c7b6fb887-tpv9d |
Pulled |
Successfully pulled image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" in 30.44s (30.44s including waiting). Image size: 678733141 bytes. | |
openstack |
kubelet |
dnsmasq-dns-5c7b6fb887-tpv9d |
Started |
Started container init | |
openstack |
kubelet |
dnsmasq-dns-75b66f9649-znfnp |
Started |
Started container init | |
openstack |
kubelet |
dnsmasq-dns-75b66f9649-znfnp |
Created |
Created container: init | |
openstack |
kubelet |
dnsmasq-dns-7d78499c-p9rp4 |
Created |
Created container: init | |
openstack |
multus |
ovn-controller-ovs-fxgqd |
AddedInterface |
Add ironic [172.20.1.30/24] from openstack/ironic | |
openstack |
kubelet |
dnsmasq-dns-75b66f9649-znfnp |
Pulled |
Successfully pulled image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" in 29.084s (29.084s including waiting). Image size: 678733141 bytes. | |
openstack |
multus |
ovsdbserver-sb-0 |
AddedInterface |
Add eth0 [10.128.0.185/23] from ovn-kubernetes | |
openstack |
kubelet |
dnsmasq-dns-6b98d7b55c-hdh27 |
Created |
Created container: init | |
openstack |
kubelet |
dnsmasq-dns-6b98d7b55c-hdh27 |
Started |
Started container init | |
openstack |
kubelet |
dnsmasq-dns-6b98d7b55c-hdh27 |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine | |
openstack |
kubelet |
dnsmasq-dns-6b98d7b55c-hdh27 |
Created |
Created container: dnsmasq-dns | |
openstack |
multus |
ovsdbserver-sb-0 |
AddedInterface |
Add internalapi [172.17.0.31/24] from openstack/internalapi | |
openstack |
kubelet |
dnsmasq-dns-75b66f9649-znfnp |
Failed |
Error: container create failed: mount `/var/lib/kubelet/pods/f7825929-3b0c-402f-9c91-3f6a0e438ea3/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory | |
openstack |
kubelet |
ovsdbserver-sb-0 |
Pulling |
Pulling image "quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server@sha256:8e9eb8af442386048b725563056463afd390c91419b0e867418596fc5795e18e" | |
openstack |
kubelet |
dnsmasq-dns-6b98d7b55c-hdh27 |
Started |
Started container dnsmasq-dns | |
| (x2) | openstack |
kubelet |
dnsmasq-dns-75b66f9649-znfnp |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ConfigMapUpdated |
Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml | |
openstack |
kubelet |
memcached-0 |
Started |
Started container memcached | |
openstack |
kubelet |
ovsdbserver-nb-0 |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:1dd32e0726b595b7431dd1d1b8055a0f0d236a02584519c2301c080b9f079470" | |
openstack |
kubelet |
dnsmasq-dns-75b66f9649-znfnp |
Started |
Started container dnsmasq-dns | |
openstack |
kubelet |
dnsmasq-dns-75b66f9649-znfnp |
Created |
Created container: dnsmasq-dns | |
openstack |
kubelet |
openstack-galera-0 |
Created |
Created container: mysql-bootstrap | |
openstack |
kubelet |
openstack-galera-0 |
Started |
Started container mysql-bootstrap | |
openstack |
kubelet |
ovsdbserver-sb-0 |
Pulling |
Pulling image "quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:1dd32e0726b595b7431dd1d1b8055a0f0d236a02584519c2301c080b9f079470" | |
openstack |
kubelet |
ovsdbserver-sb-0 |
Started |
Started container ovsdbserver-sb | |
openstack |
kubelet |
ovsdbserver-sb-0 |
Created |
Created container: ovsdbserver-sb | |
openstack |
kubelet |
ovn-controller-ovs-fxgqd |
Pulled |
Successfully pulled image "quay.io/podified-antelope-centos9/openstack-ovn-base@sha256:ec79aa2b5613713adc6a686e0efa1aba5bef9b522f9993ca02f39194cb5d3c00" in 8.589s (8.589s including waiting). Image size: 324040208 bytes. | |
openstack |
kubelet |
ovsdbserver-sb-0 |
Pulled |
Successfully pulled image "quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server@sha256:8e9eb8af442386048b725563056463afd390c91419b0e867418596fc5795e18e" in 8.166s (8.166s including waiting). Image size: 346597156 bytes. | |
openstack |
kubelet |
ovn-controller-hdbmn |
Started |
Started container ovn-controller | |
openstack |
kubelet |
ovn-controller-hdbmn |
Created |
Created container: ovn-controller | |
openstack |
kubelet |
ovn-controller-hdbmn |
Pulled |
Successfully pulled image "quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e" in 10.133s (10.133s including waiting). Image size: 346422728 bytes. | |
openstack |
kubelet |
rabbitmq-server-0 |
Pulled |
Successfully pulled image "quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:221c84e162c46ac7454de6fb84343d0a605f2ea1d7d5647a34a66569e0a8fd76" in 10.165s (10.165s including waiting). Image size: 304416840 bytes. | |
openstack |
kubelet |
rabbitmq-cell1-server-0 |
Pulled |
Successfully pulled image "quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:221c84e162c46ac7454de6fb84343d0a605f2ea1d7d5647a34a66569e0a8fd76" in 9.867s (9.867s including waiting). Image size: 304416840 bytes. | |
openstack |
kubelet |
ovn-controller-ovs-fxgqd |
Created |
Created container: ovsdb-server-init | |
openstack |
kubelet |
ovn-controller-ovs-fxgqd |
Started |
Started container ovsdb-server-init | |
openstack |
kubelet |
memcached-0 |
Pulled |
Successfully pulled image "quay.io/podified-antelope-centos9/openstack-memcached@sha256:3c3b6a71bc3205fc3cf7616172526846dac02edd188be775b358a604448e5a66" in 9.87s (9.87s including waiting). Image size: 277369033 bytes. | |
openstack |
kubelet |
openstack-galera-0 |
Pulled |
Successfully pulled image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" in 10.123s (10.123s including waiting). Image size: 429307202 bytes. | |
openstack |
kubelet |
ovsdbserver-nb-0 |
Started |
Started container ovsdbserver-nb | |
openstack |
kubelet |
memcached-0 |
Created |
Created container: memcached | |
openstack |
kubelet |
ovsdbserver-nb-0 |
Created |
Created container: ovsdbserver-nb | |
openstack |
kubelet |
ovsdbserver-nb-0 |
Pulled |
Successfully pulled image "quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server@sha256:0cea296f038e0b72578239b07ed01bf75ff2c4be033c60cfc793270a2dae1d8a" in 8.742s (8.742s including waiting). Image size: 346597156 bytes. | |
openstack |
kubelet |
openstack-cell1-galera-0 |
Started |
Started container mysql-bootstrap | |
openstack |
kubelet |
openstack-cell1-galera-0 |
Created |
Created container: mysql-bootstrap | |
openstack |
kubelet |
openstack-cell1-galera-0 |
Pulled |
Successfully pulled image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" in 10.179s (10.179s including waiting). Image size: 429307202 bytes. | |
openstack |
kubelet |
rabbitmq-cell1-server-0 |
Created |
Created container: setup-container | |
openstack |
kubelet |
ovn-controller-ovs-fxgqd |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-ovn-base@sha256:ec79aa2b5613713adc6a686e0efa1aba5bef9b522f9993ca02f39194cb5d3c00" already present on machine | |
openstack |
kubelet |
rabbitmq-server-0 |
Created |
Created container: setup-container | |
openstack |
kubelet |
rabbitmq-cell1-server-0 |
Started |
Started container setup-container | |
openstack |
kubelet |
ovsdbserver-nb-0 |
Started |
Started container openstack-network-exporter | |
openstack |
kubelet |
ovsdbserver-nb-0 |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:1dd32e0726b595b7431dd1d1b8055a0f0d236a02584519c2301c080b9f079470" in 1.835s (1.835s including waiting). Image size: 149062972 bytes. | |
openstack |
kubelet |
ovsdbserver-sb-0 |
Started |
Started container openstack-network-exporter | |
openstack |
kubelet |
ovn-controller-ovs-fxgqd |
Created |
Created container: ovsdb-server | |
openstack |
kubelet |
ovsdbserver-sb-0 |
Created |
Created container: openstack-network-exporter | |
openstack |
replicaset-controller |
dnsmasq-dns-75b66f9649 |
SuccessfulDelete |
Deleted pod: dnsmasq-dns-75b66f9649-znfnp | |
openstack |
kubelet |
ovsdbserver-nb-0 |
Created |
Created container: openstack-network-exporter | |
openstack |
kubelet |
ovsdbserver-sb-0 |
Pulled |
Successfully pulled image "quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:1dd32e0726b595b7431dd1d1b8055a0f0d236a02584519c2301c080b9f079470" in 1.856s (1.856s including waiting). Image size: 149062972 bytes. | |
openstack |
kubelet |
rabbitmq-server-0 |
Started |
Started container setup-container | |
openstack |
kubelet |
ovn-controller-ovs-fxgqd |
Started |
Started container ovs-vswitchd | |
openstack |
kubelet |
ovn-controller-ovs-fxgqd |
Started |
Started container ovsdb-server | |
openstack |
deployment-controller |
dnsmasq-dns |
ScalingReplicaSet |
Scaled down replica set dnsmasq-dns-75b66f9649 to 0 from 1 | |
openstack |
kubelet |
ovn-controller-ovs-fxgqd |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-ovn-base@sha256:ec79aa2b5613713adc6a686e0efa1aba5bef9b522f9993ca02f39194cb5d3c00" already present on machine | |
openstack |
kubelet |
ovn-controller-ovs-fxgqd |
Created |
Created container: ovs-vswitchd | |
openstack |
kubelet |
dnsmasq-dns-75b66f9649-znfnp |
Killing |
Stopping container dnsmasq-dns | |
| (x5) | openstack |
rabbitmqcluster-controller |
rabbitmq-cell1 |
SuccessfulUpdate |
updated resource rabbitmq-cell1-server of Type *v1.StatefulSet |
openstack |
kubelet |
openstack-cell1-galera-0 |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" already present on machine | |
openstack |
kubelet |
openstack-galera-0 |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" already present on machine | |
| (x5) | openstack |
rabbitmqcluster-controller |
rabbitmq-cell1 |
SuccessfulUpdate |
updated resource rabbitmq-cell1 of Type *v1.Service |
openstack |
kubelet |
openstack-galera-0 |
Created |
Created container: galera | |
openstack |
kubelet |
openstack-galera-0 |
Started |
Started container galera | |
openstack |
kubelet |
openstack-cell1-galera-0 |
Started |
Started container galera | |
openstack |
kubelet |
openstack-cell1-galera-0 |
Created |
Created container: galera | |
| (x5) | openstack |
rabbitmqcluster-controller |
rabbitmq |
SuccessfulUpdate |
updated resource rabbitmq of Type *v1.Service |
| (x5) | openstack |
rabbitmqcluster-controller |
rabbitmq |
SuccessfulUpdate |
updated resource rabbitmq-server of Type *v1.StatefulSet |
openstack |
daemonset-controller |
ovn-controller-metrics |
SuccessfulCreate |
Created pod: ovn-controller-metrics-wwqh5 | |
openstack |
replicaset-controller |
dnsmasq-dns-6fd854f54c |
SuccessfulCreate |
Created pod: dnsmasq-dns-6fd854f54c-g52n4 | |
openstack |
deployment-controller |
dnsmasq-dns |
ScalingReplicaSet |
Scaled up replica set dnsmasq-dns-6fd854f54c to 1 | |
openstack |
statefulset-controller |
ovn-northd |
SuccessfulCreate |
create Pod ovn-northd-0 in StatefulSet ovn-northd successful | |
openstack |
metallb-controller |
swift-internal |
IPAllocated |
Assigned IP ["172.17.0.80"] | |
openstack |
multus |
dnsmasq-dns-6fd854f54c-g52n4 |
AddedInterface |
Add eth0 [10.128.0.186/23] from ovn-kubernetes | |
| (x2) | openstack |
metallb-controller |
swift-internal |
deprecatedAnnotation |
Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs |
openstack |
persistentvolume-controller |
swift-swift-storage-0 |
WaitForFirstConsumer |
waiting for first consumer to be created before binding | |
openstack |
statefulset-controller |
swift-storage |
SuccessfulCreate |
create Claim swift-swift-storage-0 Pod swift-storage-0 in StatefulSet swift-storage success | |
| (x2) | openstack |
metallb-controller |
swift-internal |
deprecatedAnnotation |
Service uses deprecated annotation metallb.universe.tf/allow-shared-ip |
| (x2) | openstack |
metallb-controller |
swift-internal |
deprecatedAnnotation |
Service uses deprecated annotation metallb.universe.tf/address-pool |
openstack |
kubelet |
dnsmasq-dns-6fd854f54c-g52n4 |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine | |
openstack |
statefulset-controller |
swift-storage |
SuccessfulCreate |
create Pod swift-storage-0 in StatefulSet swift-storage successful | |
openstack |
topolvm.io_lvms-operator-59b4cb8ccf-q5dk5_41ba5f2e-d293-4c72-bf87-f4da6e126ac2 |
swift-swift-storage-0 |
Provisioning |
External provisioner is provisioning volume for claim "openstack/swift-swift-storage-0" | |
openstack |
kubelet |
| | | | dnsmasq-dns-6fd854f54c-g52n4 | Created | Created container: init |
| | openstack | multus | ovn-controller-metrics-wwqh5 | AddedInterface | Add eth0 [10.128.0.187/23] from ovn-kubernetes |
| | openstack | deployment-controller | dnsmasq-dns | ScalingReplicaSet | Scaled down replica set dnsmasq-dns-6fd854f54c to 0 from 1 |
| | openstack | kubelet | ovn-controller-metrics-wwqh5 | Pulled | Container image "quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:1dd32e0726b595b7431dd1d1b8055a0f0d236a02584519c2301c080b9f079470" already present on machine |
| | openstack | kubelet | ovn-controller-metrics-wwqh5 | Created | Created container: openstack-network-exporter |
| | openstack | kubelet | dnsmasq-dns-6fd854f54c-g52n4 | Started | Started container init |
| | openstack | replicaset-controller | dnsmasq-dns-6fd854f54c | SuccessfulDelete | Deleted pod: dnsmasq-dns-6fd854f54c-g52n4 |
| | openstack | cert-manager-certificates-trigger | swift-internal-svc | Issuing | Issuing certificate as Secret does not exist |
| | openstack | multus | ovn-northd-0 | AddedInterface | Add eth0 [10.128.0.188/23] from ovn-kubernetes |
| | openstack | kubelet | ovn-controller-metrics-wwqh5 | Started | Started container openstack-network-exporter |
| (x2) | openstack | persistentvolume-controller | swift-swift-storage-0 | ExternalProvisioning | Waiting for a volume to be created either by the external provisioner 'topolvm.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered. |
| | openstack | replicaset-controller | dnsmasq-dns-6fd49994df | SuccessfulCreate | Created pod: dnsmasq-dns-6fd49994df-55jsp |
| | openstack | cert-manager-certificates-trigger | swift-public-svc | Issuing | Issuing certificate as Secret does not exist |
| | openstack | kubelet | ovn-northd-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-ovn-northd@sha256:4790f0ac5f6443e645ea56c3e8c91695871c912f83ef4804c646319e95e2f17a" |
| | openstack | topolvm.io_lvms-operator-59b4cb8ccf-q5dk5_41ba5f2e-d293-4c72-bf87-f4da6e126ac2 | swift-swift-storage-0 | ProvisioningSucceeded | Successfully provisioned volume pvc-58883985-d49a-4529-bbce-ec8f3e112255 |
| | openstack | multus | dnsmasq-dns-6fd49994df-55jsp | AddedInterface | Add eth0 [10.128.0.189/23] from ovn-kubernetes |
| | openstack | kubelet | dnsmasq-dns-6fd49994df-55jsp | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine |
| | openstack | kubelet | dnsmasq-dns-6fd49994df-55jsp | Created | Created container: init |
| | openstack | kubelet | dnsmasq-dns-6fd49994df-55jsp | Started | Started container init |
| | openstack | cert-manager-certificates-issuing | swift-public-svc | Issuing | The certificate has been successfully issued |
| | openstack | cert-manager-certificates-request-manager | swift-public-svc | Requested | Created new CertificateRequest resource "swift-public-svc-1" |
| | openstack | cert-manager-certificaterequests-issuer-ca | swift-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-key-manager | swift-public-svc | Generated | Stored new private key in temporary Secret resource "swift-public-svc-p7z72" |
| | openstack | cert-manager-certificaterequests-issuer-ca | swift-public-svc-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificaterequests-issuer-vault | swift-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-venafi | swift-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | swift-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-acme | swift-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-approver | swift-internal-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificaterequests-issuer-ca | swift-internal-svc-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificates-key-manager | swift-internal-svc | Generated | Stored new private key in temporary Secret resource "swift-internal-svc-4wzkr" |
| | openstack | cert-manager-certificates-request-manager | swift-internal-svc | Requested | Created new CertificateRequest resource "swift-internal-svc-1" |
| | openstack | cert-manager-certificates-issuing | swift-internal-svc | Issuing | The certificate has been successfully issued |
| | openstack | cert-manager-certificaterequests-approver | swift-public-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificaterequests-issuer-vault | swift-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-ca | swift-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-acme | swift-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-venafi | swift-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | swift-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-ca | swift-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-key-manager | swift-public-route | Generated | Stored new private key in temporary Secret resource "swift-public-route-2vxhj" |
| | openstack | job-controller | swift-ring-rebalance | SuccessfulCreate | Created pod: swift-ring-rebalance-4xb95 |
| | openstack | kubelet | ovn-northd-0 | Pulled | Container image "quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:1dd32e0726b595b7431dd1d1b8055a0f0d236a02584519c2301c080b9f079470" already present on machine |
| | openstack | kubelet | dnsmasq-dns-6fd49994df-55jsp | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine |
| | openstack | kubelet | dnsmasq-dns-6fd49994df-55jsp | Created | Created container: dnsmasq-dns |
| | openstack | kubelet | dnsmasq-dns-6fd49994df-55jsp | Started | Started container dnsmasq-dns |
| | openstack | cert-manager-certificaterequests-issuer-venafi | swift-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-approver | swift-public-route-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | kubelet | ovn-northd-0 | Started | Started container ovn-northd |
| | openstack | cert-manager-certificaterequests-issuer-ca | swift-public-route-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | kubelet | ovn-northd-0 | Created | Created container: ovn-northd |
| | openstack | kubelet | ovn-northd-0 | Started | Started container openstack-network-exporter |
| | openstack | kubelet | ovn-northd-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-ovn-northd@sha256:4790f0ac5f6443e645ea56c3e8c91695871c912f83ef4804c646319e95e2f17a" in 1.058s (1.058s including waiting). Image size: 346594251 bytes. |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | swift-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-acme | swift-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-vault | swift-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-issuing | swift-public-route | Issuing | The certificate has been successfully issued |
| | openstack | cert-manager-certificates-request-manager | swift-public-route | Requested | Created new CertificateRequest resource "swift-public-route-1" |
| | openstack | kubelet | ovn-northd-0 | Created | Created container: openstack-network-exporter |
| | openstack | cert-manager-certificates-trigger | swift-public-route | Issuing | Issuing certificate as Secret does not exist |
| | openstack | multus | swift-ring-rebalance-4xb95 | AddedInterface | Add eth0 [10.128.0.191/23] from ovn-kubernetes |
| | openstack | kubelet | swift-ring-rebalance-4xb95 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-swift-proxy-server@sha256:32aab2bf162442b5c6bbb3716fbdb0ec53cb67d6b0e7f018766b29cd8cb8692d" |
| | openstack | job-controller | keystone-737e-account-create-update | SuccessfulCreate | Created pod: keystone-737e-account-create-update-z4wjt |
| | openstack | job-controller | placement-db-create | SuccessfulCreate | Created pod: placement-db-create-kjk8x |
| | openstack | job-controller | placement-094f-account-create-update | SuccessfulCreate | Created pod: placement-094f-account-create-update-9dg59 |
| | openstack | job-controller | keystone-db-create | SuccessfulCreate | Created pod: keystone-db-create-trh26 |
| | openstack | kubelet | swift-ring-rebalance-4xb95 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-swift-proxy-server@sha256:32aab2bf162442b5c6bbb3716fbdb0ec53cb67d6b0e7f018766b29cd8cb8692d" in 3.499s (3.499s including waiting). Image size: 500018961 bytes. |
| | openstack | kubelet | placement-094f-account-create-update-9dg59 | Started | Started container mariadb-account-create-update |
| | openstack | multus | placement-db-create-kjk8x | AddedInterface | Add eth0 [10.128.0.195/23] from ovn-kubernetes |
| | openstack | kubelet | placement-db-create-kjk8x | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" already present on machine |
| | openstack | kubelet | swift-ring-rebalance-4xb95 | Started | Started container swift-ring-rebalance |
| | openstack | multus | placement-094f-account-create-update-9dg59 | AddedInterface | Add eth0 [10.128.0.194/23] from ovn-kubernetes |
| | openstack | kubelet | placement-094f-account-create-update-9dg59 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" already present on machine |
| | openstack | kubelet | placement-094f-account-create-update-9dg59 | Created | Created container: mariadb-account-create-update |
| | openstack | multus | keystone-737e-account-create-update-z4wjt | AddedInterface | Add eth0 [10.128.0.192/23] from ovn-kubernetes |
| | openstack | kubelet | keystone-737e-account-create-update-z4wjt | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" already present on machine |
| | openstack | kubelet | keystone-737e-account-create-update-z4wjt | Created | Created container: mariadb-account-create-update |
| | openstack | kubelet | placement-db-create-kjk8x | Created | Created container: mariadb-database-create |
| | openstack | job-controller | glance-db-create | SuccessfulCreate | Created pod: glance-db-create-qfrvt |
| | openstack | kubelet | keystone-db-create-trh26 | Started | Started container mariadb-database-create |
| | openstack | kubelet | keystone-737e-account-create-update-z4wjt | Started | Started container mariadb-account-create-update |
| | openstack | kubelet | keystone-db-create-trh26 | Created | Created container: mariadb-database-create |
| | openstack | kubelet | keystone-db-create-trh26 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" already present on machine |
| | openstack | kubelet | swift-ring-rebalance-4xb95 | Created | Created container: swift-ring-rebalance |
| | openstack | multus | keystone-db-create-trh26 | AddedInterface | Add eth0 [10.128.0.193/23] from ovn-kubernetes |
| | openstack | kubelet | placement-db-create-kjk8x | Started | Started container mariadb-database-create |
| | openstack | multus | glance-db-create-qfrvt | AddedInterface | Add eth0 [10.128.0.196/23] from ovn-kubernetes |
| (x5) | openstack | kubelet | swift-storage-0 | FailedMount | MountVolume.SetUp failed for volume "etc-swift" : configmap "swift-ring-files" not found |
| | openstack | kubelet | glance-4c91-account-create-update-b2plp | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" already present on machine |
| | openstack | kubelet | glance-db-create-qfrvt | Started | Started container mariadb-database-create |
| | openstack | job-controller | glance-4c91-account-create-update | SuccessfulCreate | Created pod: glance-4c91-account-create-update-b2plp |
| | openstack | kubelet | glance-db-create-qfrvt | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" already present on machine |
| | openstack | multus | glance-4c91-account-create-update-b2plp | AddedInterface | Add eth0 [10.128.0.197/23] from ovn-kubernetes |
| | openstack | kubelet | glance-db-create-qfrvt | Created | Created container: mariadb-database-create |
| | openstack | replicaset-controller | dnsmasq-dns-6b98d7b55c | SuccessfulDelete | Deleted pod: dnsmasq-dns-6b98d7b55c-hdh27 |
| | openstack | kubelet | glance-4c91-account-create-update-b2plp | Created | Created container: mariadb-account-create-update |
| | openstack | kubelet | glance-4c91-account-create-update-b2plp | Started | Started container mariadb-account-create-update |
| | openstack | kubelet | dnsmasq-dns-6b98d7b55c-hdh27 | Killing | Stopping container dnsmasq-dns |
| | openstack | kubelet | dnsmasq-dns-6b98d7b55c-hdh27 | Unhealthy | Readiness probe failed: dial tcp 10.128.0.176:5353: connect: connection refused |
| | openstack | job-controller | root-account-create-update | SuccessfulCreate | Created pod: root-account-create-update-sqtzz |
| | openstack | kubelet | root-account-create-update-sqtzz | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" already present on machine |
| | openstack | job-controller | placement-db-create | Completed | Job completed |
| | openstack | kubelet | root-account-create-update-sqtzz | Created | Created container: mariadb-account-create-update |
| | openstack | job-controller | keystone-db-create | Completed | Job completed |
| | openstack | job-controller | glance-db-create | Completed | Job completed |
| | openstack | multus | root-account-create-update-sqtzz | AddedInterface | Add eth0 [10.128.0.198/23] from ovn-kubernetes |
| | openstack | job-controller | placement-094f-account-create-update | Completed | Job completed |
| | openstack | job-controller | keystone-737e-account-create-update | Completed | Job completed |
| | openstack | kubelet | root-account-create-update-sqtzz | Started | Started container mariadb-account-create-update |
| | openstack | job-controller | glance-4c91-account-create-update | Completed | Job completed |
| | openstack | job-controller | glance-db-sync | SuccessfulCreate | Created pod: glance-db-sync-88f2d |
| | openstack | multus | glance-db-sync-88f2d | AddedInterface | Add eth0 [10.128.0.199/23] from ovn-kubernetes |
| | openstack | kubelet | glance-db-sync-88f2d | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-glance-api@sha256:2a146cb0eb1a819e7b367354687fa3eeb3894fa4a03eadd0dc2e2c849345cbf0" |
| | openstack | multus | glance-db-sync-88f2d | AddedInterface | Add storage [172.18.0.30/24] from openstack/storage |
| | openstack | job-controller | root-account-create-update | Completed | Job completed |
| | openstack | job-controller | swift-ring-rebalance | Completed | Job completed |
| | openstack | kubelet | swift-storage-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-swift-account@sha256:44d881639804053fb0ee337aba3a91cac88419b2db798a043bcf2fd1f3a2f70d" |
| | openstack | multus | swift-storage-0 | AddedInterface | Add eth0 [10.128.0.190/23] from ovn-kubernetes |
| | openstack | kubelet | swift-storage-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-swift-account@sha256:44d881639804053fb0ee337aba3a91cac88419b2db798a043bcf2fd1f3a2f70d" in 1.237s (1.237s including waiting). Image size: 444958214 bytes. |
| | openstack | kubelet | swift-storage-0 | Started | Started container account-server |
| | openstack | kubelet | swift-storage-0 | Created | Created container: account-server |
| | openstack | job-controller | ovn-controller-hdbmn-config | SuccessfulCreate | Created pod: ovn-controller-hdbmn-config-6g5c8 |
| | openstack | kubelet | swift-storage-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-swift-account@sha256:44d881639804053fb0ee337aba3a91cac88419b2db798a043bcf2fd1f3a2f70d" already present on machine |
| | openstack | kubelet | swift-storage-0 | Created | Created container: account-replicator |
| | openstack | kubelet | swift-storage-0 | Started | Started container account-replicator |
| | openstack | job-controller | root-account-create-update | SuccessfulCreate | Created pod: root-account-create-update-tdkt8 |
| (x2) | openstack | kubelet | ovn-controller-hdbmn | Unhealthy | Readiness probe failed: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status |
| | openstack | kubelet | rabbitmq-server-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:221c84e162c46ac7454de6fb84343d0a605f2ea1d7d5647a34a66569e0a8fd76" already present on machine |
| | openstack | kubelet | swift-storage-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-swift-account@sha256:44d881639804053fb0ee337aba3a91cac88419b2db798a043bcf2fd1f3a2f70d" already present on machine |
| | openstack | kubelet | rabbitmq-cell1-server-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:221c84e162c46ac7454de6fb84343d0a605f2ea1d7d5647a34a66569e0a8fd76" already present on machine |
| | openstack | multus | ovn-controller-hdbmn-config-6g5c8 | AddedInterface | Add eth0 [10.128.0.200/23] from ovn-kubernetes |
| | openstack | multus | root-account-create-update-tdkt8 | AddedInterface | Add eth0 [10.128.0.201/23] from ovn-kubernetes |
| | openstack | kubelet | swift-storage-0 | Created | Created container: account-auditor |
| | openstack | kubelet | swift-storage-0 | Started | Started container account-auditor |
| | openstack | kubelet | rabbitmq-server-0 | Started | Started container rabbitmq |
| | openstack | kubelet | rabbitmq-server-0 | Created | Created container: rabbitmq |
| | openstack | kubelet | rabbitmq-cell1-server-0 | Created | Created container: rabbitmq |
| | openstack | kubelet | rabbitmq-cell1-server-0 | Started | Started container rabbitmq |
| | openstack | kubelet | root-account-create-update-tdkt8 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" already present on machine |
| | openstack | kubelet | swift-storage-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-swift-account@sha256:44d881639804053fb0ee337aba3a91cac88419b2db798a043bcf2fd1f3a2f70d" already present on machine |
| | openstack | kubelet | ovn-controller-hdbmn-config-6g5c8 | Created | Created container: ovn-config |
| | openstack | kubelet | glance-db-sync-88f2d | Started | Started container glance-db-sync |
| | openstack | kubelet | glance-db-sync-88f2d | Created | Created container: glance-db-sync |
| | openstack | kubelet | ovn-controller-hdbmn-config-6g5c8 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e" already present on machine |
| | openstack | kubelet | root-account-create-update-tdkt8 | Created | Created container: mariadb-account-create-update |
| | openstack | kubelet | ovn-controller-hdbmn-config-6g5c8 | Started | Started container ovn-config |
| | openstack | kubelet | root-account-create-update-tdkt8 | Started | Started container mariadb-account-create-update |
| | openstack | kubelet | swift-storage-0 | Created | Created container: account-reaper |
| | openstack | kubelet | swift-storage-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-swift-container@sha256:84fc7b1f4a5e6848eb35976883d0e29ab556ebce6fb6c37fc6a3a4a77c9c8ea8" |
| | openstack | kubelet | glance-db-sync-88f2d | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-glance-api@sha256:2a146cb0eb1a819e7b367354687fa3eeb3894fa4a03eadd0dc2e2c849345cbf0" in 12.806s (12.806s including waiting). Image size: 982743920 bytes. |
| | openstack | kubelet | swift-storage-0 | Started | Started container account-reaper |
| | openstack | kubelet | swift-storage-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-swift-container@sha256:84fc7b1f4a5e6848eb35976883d0e29ab556ebce6fb6c37fc6a3a4a77c9c8ea8" in 1.108s (1.108s including waiting). Image size: 444974600 bytes. |
| | openstack | kubelet | swift-storage-0 | Created | Created container: container-server |
| | openstack | kubelet | swift-storage-0 | Started | Started container container-server |
| | openstack | kubelet | swift-storage-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-swift-container@sha256:84fc7b1f4a5e6848eb35976883d0e29ab556ebce6fb6c37fc6a3a4a77c9c8ea8" already present on machine |
| | openstack | kubelet | swift-storage-0 | Created | Created container: container-replicator |
| | openstack | kubelet | swift-storage-0 | Started | Started container container-replicator |
| | openstack | job-controller | root-account-create-update | Completed | Job completed |
| | openstack | job-controller | ovn-controller-hdbmn-config | Completed | Job completed |
| | openstack | multus | dnsmasq-dns-67dc4d787c-m7s4w | AddedInterface | Add eth0 [10.128.0.202/23] from ovn-kubernetes |
| | openstack | kubelet | dnsmasq-dns-67dc4d787c-m7s4w | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine |
| | openstack | replicaset-controller | dnsmasq-dns-67dc4d787c | SuccessfulCreate | Created pod: dnsmasq-dns-67dc4d787c-m7s4w |
| | openstack | rabbitmq-cell1-server-0/rabbitmq_peer_discovery | pod/rabbitmq-cell1-server-0 | Created | Node rabbit@rabbitmq-cell1-server-0.rabbitmq-cell1-nodes.openstack is registered |
| | openstack | kubelet | dnsmasq-dns-67dc4d787c-m7s4w | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine |
| | openstack | kubelet | dnsmasq-dns-67dc4d787c-m7s4w | Started | Started container init |
| | openstack | kubelet | dnsmasq-dns-67dc4d787c-m7s4w | Created | Created container: init |
| | openstack | rabbitmq-server-0/rabbitmq_peer_discovery | pod/rabbitmq-server-0 | Created | Node rabbit@rabbitmq-server-0.rabbitmq-nodes.openstack is registered |
| | openstack | metallb-speaker | rabbitmq | nodeAssigned | announcing from node "master-0" with protocol "layer2" |
| | openstack | job-controller | cinder-db-create | SuccessfulCreate | Created pod: cinder-db-create-5fmzp |
| | openstack | job-controller | keystone-db-sync | SuccessfulCreate | Created pod: keystone-db-sync-dqtpw |
| | openstack | job-controller | cinder-be98-account-create-update | SuccessfulCreate | Created pod: cinder-be98-account-create-update-ccwpm |
| | openstack | job-controller | neutron-406d-account-create-update | SuccessfulCreate | Created pod: neutron-406d-account-create-update-qv9dz |
| | openstack | job-controller | neutron-db-create | SuccessfulCreate | Created pod: neutron-db-create-g9g6p |
| | openstack | kubelet | dnsmasq-dns-67dc4d787c-m7s4w | Started | Started container dnsmasq-dns |
| | openstack | kubelet | dnsmasq-dns-67dc4d787c-m7s4w | Created | Created container: dnsmasq-dns |
| | openstack | kubelet | cinder-be98-account-create-update-ccwpm | Started | Started container mariadb-account-create-update |
| | openstack | kubelet | cinder-db-create-5fmzp | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" already present on machine |
| | openstack | kubelet | keystone-db-sync-dqtpw | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-keystone@sha256:d832d062b84e8f6354ac9ace6aafd6fed301d95a94751db33338dccc1ab59605" |
| | openstack | multus | neutron-406d-account-create-update-qv9dz | AddedInterface | Add eth0 [10.128.0.207/23] from ovn-kubernetes |
| | openstack | multus | cinder-be98-account-create-update-ccwpm | AddedInterface | Add eth0 [10.128.0.204/23] from ovn-kubernetes |
| | openstack | kubelet | cinder-be98-account-create-update-ccwpm | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" already present on machine |
| | openstack | multus | keystone-db-sync-dqtpw | AddedInterface | Add eth0 [10.128.0.206/23] from ovn-kubernetes |
| | openstack | multus | neutron-db-create-g9g6p | AddedInterface | Add eth0 [10.128.0.205/23] from ovn-kubernetes |
| | openstack | kubelet | cinder-be98-account-create-update-ccwpm | Created | Created container: mariadb-account-create-update |
| | openstack | kubelet | cinder-db-create-5fmzp | Started | Started container mariadb-database-create |
| | openstack | kubelet | neutron-db-create-g9g6p | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" already present on machine |
| | openstack | kubelet | cinder-db-create-5fmzp | Created | Created container: mariadb-database-create |
| | openstack | multus | cinder-db-create-5fmzp | AddedInterface | Add eth0 [10.128.0.203/23] from ovn-kubernetes |
| | openstack | kubelet | neutron-db-create-g9g6p | Started | Started container mariadb-database-create |
| | openstack | kubelet | neutron-406d-account-create-update-qv9dz | Started | Started container mariadb-account-create-update |
| | openstack | kubelet | neutron-406d-account-create-update-qv9dz | Created | Created container: mariadb-account-create-update |
| | openstack | job-controller | glance-db-sync | Completed | Job completed |
| | openstack | kubelet | neutron-db-create-g9g6p | Created | Created container: mariadb-database-create |
| | openstack | kubelet | neutron-406d-account-create-update-qv9dz | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" already present on machine |
| | openstack | kubelet | dnsmasq-dns-67dc4d787c-m7s4w | Killing | Stopping container dnsmasq-dns |
| | openstack | cert-manager-certificates-trigger | glance-default-internal-svc | Issuing | Issuing certificate as Secret does not exist |
| (x2) | openstack | metallb-controller | glance-default-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/allow-shared-ip |
| (x2) | openstack | metallb-controller | glance-default-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs |
| (x2) | openstack | metallb-controller | glance-default-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/address-pool |
| | openstack | metallb-controller | glance-default-internal | IPAllocated | Assigned IP ["172.17.0.80"] |
| | openstack | replicaset-controller | dnsmasq-dns-67dc4d787c | SuccessfulDelete | Deleted pod: dnsmasq-dns-67dc4d787c-m7s4w |
| | openstack | replicaset-controller | dnsmasq-dns-676f54c559 | SuccessfulCreate | Created pod: dnsmasq-dns-676f54c559-bfcw7 |
| | openstack | cert-manager-certificates-key-manager | glance-default-internal-svc | Generated | Stored new private key in temporary Secret resource "glance-default-internal-svc-svzhx" |
| | openstack | cert-manager-certificaterequests-issuer-vault | glance-default-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-ca | glance-default-internal-svc-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | glance-default-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-issuing | glance-default-internal-svc | Issuing | The certificate has been successfully issued |
| | openstack | cert-manager-certificates-request-manager | glance-default-internal-svc | Requested | Created new CertificateRequest resource "glance-default-internal-svc-1" |
| | openstack | cert-manager-certificaterequests-issuer-vault | glance-default-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | glance-default-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-venafi | glance-default-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-venafi | glance-default-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-acme | glance-default-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-approver | glance-default-internal-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | multus | dnsmasq-dns-676f54c559-bfcw7 | AddedInterface | Add eth0 [10.128.0.208/23] from ovn-kubernetes |
| | openstack | cert-manager-certificates-issuing | glance-default-public-svc | Issuing | The certificate has been successfully issued |
| | openstack | cert-manager-certificaterequests-issuer-ca | glance-default-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-request-manager | glance-default-public-svc | Requested | Created new CertificateRequest resource "glance-default-public-svc-1" |
| | openstack | job-controller | cinder-db-create | Completed | Job completed |
| | openstack | cert-manager-certificates-key-manager | glance-default-public-svc | Generated | Stored new private key in temporary Secret resource "glance-default-public-svc-g89dv" |
| | openstack | cert-manager-certificates-trigger | glance-default-public-svc | Issuing | Issuing certificate as Secret does not exist |
| | openstack | cert-manager-certificaterequests-issuer-ca | glance-default-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-ca | glance-default-public-svc-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificaterequests-issuer-acme | glance-default-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-approver | glance-default-public-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificates-trigger | glance-default-public-route | Issuing | Issuing certificate as Secret does not exist |
| | openstack | cert-manager-certificaterequests-issuer-ca | glance-default-public-route-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificaterequests-issuer-acme | glance-default-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-ca | glance-default-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-vault | glance-default-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | glance-default-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-key-manager | glance-default-public-route | Generated | Stored new private key in temporary Secret resource "glance-default-public-route-lvbhf" |
| | openstack | cert-manager-certificaterequests-issuer-venafi | glance-default-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-approver | glance-default-public-route-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificates-issuing | glance-default-public-route | Issuing | The certificate has been successfully issued |
| | openstack | cert-manager-certificates-request-manager | glance-default-public-route | Requested | Created new CertificateRequest resource "glance-default-public-route-1" |
| | openstack | kubelet | keystone-db-sync-dqtpw | Started | Started container keystone-db-sync |
| | openstack | kubelet | keystone-db-sync-dqtpw | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-keystone@sha256:d832d062b84e8f6354ac9ace6aafd6fed301d95a94751db33338dccc1ab59605" in 5.932s (5.932s including waiting). Image size: 519933449 bytes. |
| | openstack | kubelet | dnsmasq-dns-676f54c559-bfcw7 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine |
| | openstack | kubelet | dnsmasq-dns-676f54c559-bfcw7 | Created | Created container: init |
| | openstack | kubelet | keystone-db-sync-dqtpw | Created | Created container: keystone-db-sync |
| | openstack | kubelet | dnsmasq-dns-676f54c559-bfcw7 | Started | Started container init |
| | openstack | kubelet | dnsmasq-dns-676f54c559-bfcw7 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine |
| | openstack | job-controller | cinder-be98-account-create-update | Completed | Job completed |
| | openstack | job-controller | neutron-406d-account-create-update | Completed | Job completed |
| | openstack | kubelet | dnsmasq-dns-676f54c559-bfcw7 | Created | Created container: dnsmasq-dns |
| | openstack | kubelet | dnsmasq-dns-676f54c559-bfcw7 | Started | Started container dnsmasq-dns |
| | openstack | job-controller | neutron-db-create | Completed | Job completed |
| | openstack | metallb-speaker | rabbitmq-cell1 | nodeAssigned | announcing from node "master-0" with protocol "layer2" |
| | openstack | replicaset-controller | dnsmasq-dns-6fd49994df | SuccessfulDelete | Deleted pod: dnsmasq-dns-6fd49994df-55jsp |
openstack |
kubelet |
dnsmasq-dns-6fd49994df-55jsp |
Killing |
Stopping container dnsmasq-dns | |
| (x2) | openstack |
metallb-controller |
keystone-internal |
deprecatedAnnotation |
Service uses deprecated annotation metallb.universe.tf/allow-shared-ip |
openstack |
job-controller |
cinder-04ef3-db-sync |
SuccessfulCreate |
Created pod: cinder-04ef3-db-sync-smx72 | |
openstack |
job-controller |
keystone-db-sync |
Completed |
Job completed | |
openstack |
job-controller |
keystone-bootstrap |
SuccessfulCreate |
Created pod: keystone-bootstrap-7jqwh | |
openstack |
topolvm.io_lvms-operator-59b4cb8ccf-q5dk5_41ba5f2e-d293-4c72-bf87-f4da6e126ac2 |
glance-glance-7b9c2-default-external-api-0 |
Provisioning |
External provisioner is provisioning volume for claim "openstack/glance-glance-7b9c2-default-external-api-0" | |
openstack |
metallb-controller |
keystone-internal |
IPAllocated |
Assigned IP ["172.17.0.80"] | |
| (x2) | openstack |
metallb-controller |
keystone-internal |
deprecatedAnnotation |
Service uses deprecated annotation metallb.universe.tf/address-pool |
openstack |
job-controller |
neutron-db-sync |
SuccessfulCreate |
Created pod: neutron-db-sync-kr2xk | |
| (x2) | openstack |
metallb-controller |
keystone-internal |
deprecatedAnnotation |
Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs |
openstack |
statefulset-controller |
glance-7b9c2-default-external-api |
SuccessfulCreate |
create Claim glance-glance-7b9c2-default-external-api-0 Pod glance-7b9c2-default-external-api-0 in StatefulSet glance-7b9c2-default-external-api success | |
openstack |
statefulset-controller |
glance-7b9c2-default-internal-api |
SuccessfulCreate |
create Claim glance-glance-7b9c2-default-internal-api-0 Pod glance-7b9c2-default-internal-api-0 in StatefulSet glance-7b9c2-default-internal-api success | |
openstack |
persistentvolume-controller |
glance-glance-7b9c2-default-internal-api-0 |
WaitForFirstConsumer |
waiting for first consumer to be created before binding | |
openstack |
job-controller |
ironic-db-create |
SuccessfulCreate |
Created pod: ironic-db-create-hgvqn | |
openstack |
replicaset-controller |
dnsmasq-dns-68b4779d45 |
SuccessfulCreate |
Created pod: dnsmasq-dns-68b4779d45-4ql8j | |
openstack |
persistentvolume-controller |
glance-glance-7b9c2-default-external-api-0 |
WaitForFirstConsumer |
waiting for first consumer to be created before binding | |
| (x2) | openstack |
metallb-controller |
placement-internal |
deprecatedAnnotation |
Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs |
| (x2) | openstack |
metallb-controller |
placement-internal |
deprecatedAnnotation |
Service uses deprecated annotation metallb.universe.tf/address-pool |
| (x2) | openstack |
persistentvolume-controller |
glance-glance-7b9c2-default-external-api-0 |
ExternalProvisioning |
Waiting for a volume to be created either by the external provisioner 'topolvm.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered. |
openstack |
job-controller |
ironic-874a-account-create-update |
SuccessfulCreate |
Created pod: ironic-874a-account-create-update-lhwlv | |
openstack |
multus |
keystone-bootstrap-7jqwh |
AddedInterface |
Add eth0 [10.128.0.210/23] from ovn-kubernetes | |
| (x3) | openstack |
persistentvolume-controller |
glance-glance-7b9c2-default-internal-api-0 |
ExternalProvisioning |
Waiting for a volume to be created either by the external provisioner 'topolvm.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered. |
openstack |
multus |
dnsmasq-dns-68b4779d45-4ql8j |
AddedInterface |
Add eth0 [10.128.0.209/23] from ovn-kubernetes | |
| (x2) | openstack |
metallb-controller |
placement-internal |
deprecatedAnnotation |
Service uses deprecated annotation metallb.universe.tf/allow-shared-ip |
openstack |
metallb-controller |
placement-internal |
IPAllocated |
Assigned IP ["172.17.0.80"] | |
openstack |
replicaset-controller |
dnsmasq-dns-68b4779d45 |
SuccessfulDelete |
Deleted pod: dnsmasq-dns-68b4779d45-4ql8j | |
| | openstack | cert-manager-certificates-trigger | keystone-internal-svc | Issuing | Issuing certificate as Secret does not exist |
| | openstack | kubelet | ironic-db-create-hgvqn | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" already present on machine |
| | openstack | kubelet | dnsmasq-dns-68b4779d45-4ql8j | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine |
| | openstack | kubelet | ironic-db-create-hgvqn | Started | Started container mariadb-database-create |
| | openstack | job-controller | placement-db-sync | SuccessfulCreate | Created pod: placement-db-sync-tgjmt |
| | openstack | kubelet | dnsmasq-dns-68b4779d45-4ql8j | Created | Created container: init |
| | openstack | kubelet | ironic-db-create-hgvqn | Created | Created container: mariadb-database-create |
| | openstack | kubelet | dnsmasq-dns-68b4779d45-4ql8j | Started | Started container init |
| | openstack | replicaset-controller | dnsmasq-dns-d687b68b9 | SuccessfulCreate | Created pod: dnsmasq-dns-d687b68b9-7r7fm |
| | openstack | multus | neutron-db-sync-kr2xk | AddedInterface | Add eth0 [10.128.0.212/23] from ovn-kubernetes |
| | openstack | kubelet | keystone-bootstrap-7jqwh | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-keystone@sha256:d832d062b84e8f6354ac9ace6aafd6fed301d95a94751db33338dccc1ab59605" already present on machine |
| | openstack | kubelet | keystone-bootstrap-7jqwh | Created | Created container: keystone-bootstrap |
| | openstack | kubelet | keystone-bootstrap-7jqwh | Started | Started container keystone-bootstrap |
| | openstack | multus | ironic-db-create-hgvqn | AddedInterface | Add eth0 [10.128.0.211/23] from ovn-kubernetes |
| | openstack | cert-manager-certificates-request-manager | keystone-internal-svc | Requested | Created new CertificateRequest resource "keystone-internal-svc-1" |
| | openstack | kubelet | ironic-874a-account-create-update-lhwlv | Created | Created container: mariadb-account-create-update |
| | openstack | cert-manager-certificaterequests-approver | keystone-internal-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificates-issuing | keystone-internal-svc | Issuing | The certificate has been successfully issued |
| | openstack | topolvm.io_lvms-operator-59b4cb8ccf-q5dk5_41ba5f2e-d293-4c72-bf87-f4da6e126ac2 | glance-glance-7b9c2-default-external-api-0 | ProvisioningSucceeded | Successfully provisioned volume pvc-2a678bf0-1e2e-44f7-a96e-4d9029ee1884 |
| | openstack | multus | dnsmasq-dns-d687b68b9-7r7fm | AddedInterface | Add eth0 [10.128.0.216/23] from ovn-kubernetes |
| | openstack | cert-manager-certificates-trigger | keystone-public-route | Issuing | Issuing certificate as Secret does not exist |
| | openstack | cert-manager-certificaterequests-issuer-vault | keystone-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-ca | keystone-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | kubelet | dnsmasq-dns-68b4779d45-4ql8j | Failed | Error: container create failed: mount `/var/lib/kubelet/pods/090d05ed-b86b-4aba-bbe6-71eb213db07a/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory |
| | openstack | kubelet | dnsmasq-dns-68b4779d45-4ql8j | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine |
| | openstack | cert-manager-certificaterequests-issuer-acme | keystone-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | keystone-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-venafi | keystone-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-approver | keystone-public-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | topolvm.io_lvms-operator-59b4cb8ccf-q5dk5_41ba5f2e-d293-4c72-bf87-f4da6e126ac2 | glance-glance-7b9c2-default-internal-api-0 | Provisioning | External provisioner is provisioning volume for claim "openstack/glance-glance-7b9c2-default-internal-api-0" |
| | openstack | kubelet | cinder-04ef3-db-sync-smx72 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:3fa6e687aa002b92fedbfe2c1ccaa2906b399c58d17bf9ecece2c4cd69a0210b" |
| | openstack | multus | ironic-874a-account-create-update-lhwlv | AddedInterface | Add eth0 [10.128.0.214/23] from ovn-kubernetes |
| | openstack | kubelet | ironic-874a-account-create-update-lhwlv | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" already present on machine |
| | openstack | cert-manager-certificates-key-manager | keystone-internal-svc | Generated | Stored new private key in temporary Secret resource "keystone-internal-svc-xp89p" |
| | openstack | kubelet | ironic-874a-account-create-update-lhwlv | Started | Started container mariadb-account-create-update |
| | openstack | cert-manager-certificaterequests-issuer-ca | keystone-public-svc-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificates-trigger | keystone-public-svc | Issuing | Issuing certificate as Secret does not exist |
| | openstack | cert-manager-certificates-key-manager | keystone-public-svc | Generated | Stored new private key in temporary Secret resource "keystone-public-svc-t7k69" |
| | openstack | cert-manager-certificates-request-manager | keystone-public-svc | Requested | Created new CertificateRequest resource "keystone-public-svc-1" |
| | openstack | cert-manager-certificates-issuing | keystone-public-svc | Issuing | The certificate has been successfully issued |
| | openstack | kubelet | placement-db-sync-tgjmt | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-placement-api@sha256:657020ed78b5d92505b0b4187dfcf078515484304fd39ce38702d4fb06f4ca36" |
| | openstack | cert-manager-certificaterequests-issuer-ca | keystone-internal-svc-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | multus | placement-db-sync-tgjmt | AddedInterface | Add eth0 [10.128.0.215/23] from ovn-kubernetes |
| | openstack | multus | cinder-04ef3-db-sync-smx72 | AddedInterface | Add eth0 [10.128.0.213/23] from ovn-kubernetes |
| | openstack | kubelet | neutron-db-sync-kr2xk | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine |
| | openstack | kubelet | neutron-db-sync-kr2xk | Created | Created container: neutron-db-sync |
| | openstack | kubelet | neutron-db-sync-kr2xk | Started | Started container neutron-db-sync |
| | openstack | kubelet | dnsmasq-dns-d687b68b9-7r7fm | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine |
| | openstack | cert-manager-certificaterequests-issuer-vault | keystone-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | keystone-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-acme | keystone-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-ca | keystone-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-venafi | keystone-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | keystone-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-acme | keystone-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | topolvm.io_lvms-operator-59b4cb8ccf-q5dk5_41ba5f2e-d293-4c72-bf87-f4da6e126ac2 | glance-glance-7b9c2-default-internal-api-0 | ProvisioningSucceeded | Successfully provisioned volume pvc-a034608b-53d3-45d8-84b2-146bea988703 |
| | openstack | kubelet | dnsmasq-dns-d687b68b9-7r7fm | Created | Created container: init |
| | openstack | kubelet | dnsmasq-dns-d687b68b9-7r7fm | Started | Started container init |
| | openstack | cert-manager-certificaterequests-issuer-venafi | placement-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-acme | placement-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-ca | placement-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-vault | placement-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | placement-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-trigger | placement-internal-svc | Issuing | Issuing certificate as Secret does not exist |
| | openstack | cert-manager-certificates-request-manager | placement-internal-svc | Requested | Created new CertificateRequest resource "placement-internal-svc-1" |
| | openstack | cert-manager-certificaterequests-issuer-ca | keystone-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-venafi | keystone-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-vault | keystone-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-approver | keystone-public-route-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificaterequests-issuer-ca | keystone-public-route-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificates-key-manager | placement-internal-svc | Generated | Stored new private key in temporary Secret resource "placement-internal-svc-g8ftq" |
| | openstack | cert-manager-certificates-key-manager | keystone-public-route | Generated | Stored new private key in temporary Secret resource "keystone-public-route-qldvd" |
| | openstack | cert-manager-certificates-request-manager | keystone-public-route | Requested | Created new CertificateRequest resource "keystone-public-route-1" |
| | openstack | cert-manager-certificates-issuing | keystone-public-route | Issuing | The certificate has been successfully issued |
| | openstack | cert-manager-certificaterequests-issuer-ca | placement-internal-svc-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificates-trigger | placement-public-svc | Issuing | Issuing certificate as Secret does not exist |
| | openstack | cert-manager-certificates-issuing | placement-internal-svc | Issuing | The certificate has been successfully issued |
| | openstack | kubelet | dnsmasq-dns-d687b68b9-7r7fm | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine |
| | openstack | kubelet | dnsmasq-dns-d687b68b9-7r7fm | Created | Created container: dnsmasq-dns |
| | openstack | kubelet | dnsmasq-dns-d687b68b9-7r7fm | Started | Started container dnsmasq-dns |
| | openstack | cert-manager-certificaterequests-approver | placement-internal-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificaterequests-issuer-vault | placement-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | placement-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-approver | placement-public-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificaterequests-issuer-ca | placement-public-svc-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | job-controller | ironic-db-create | Completed | Job completed |
| | openstack | cert-manager-certificates-key-manager | placement-public-svc | Generated | Stored new private key in temporary Secret resource "placement-public-svc-phkdr" |
| | openstack | cert-manager-certificaterequests-issuer-acme | placement-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-ca | placement-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-venafi | placement-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-request-manager | placement-public-svc | Requested | Created new CertificateRequest resource "placement-public-svc-1" |
| | openstack | cert-manager-certificates-issuing | placement-public-svc | Issuing | The certificate has been successfully issued |
| | openstack | cert-manager-certificaterequests-approver | placement-public-route-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificates-request-manager | placement-public-route | Requested | Created new CertificateRequest resource "placement-public-route-1" |
| | openstack | job-controller | ironic-874a-account-create-update | Completed | Job completed |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | placement-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-ca | placement-public-route-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificates-trigger | placement-public-route | Issuing | Issuing certificate as Secret does not exist |
| | openstack | cert-manager-certificates-key-manager | placement-public-route | Generated | Stored new private key in temporary Secret resource "placement-public-route-2cltn" |
| | openstack | cert-manager-certificaterequests-issuer-vault | placement-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-ca | placement-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-acme | placement-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-venafi | placement-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | multus | glance-7b9c2-default-internal-api-0 | AddedInterface | Add eth0 [10.128.0.218/23] from ovn-kubernetes |
| | openstack | cert-manager-certificates-issuing | placement-public-route | Issuing | The certificate has been successfully issued |
| | openstack | kubelet | placement-db-sync-tgjmt | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-placement-api@sha256:657020ed78b5d92505b0b4187dfcf078515484304fd39ce38702d4fb06f4ca36" in 4.674s (4.674s including waiting). Image size: 472479445 bytes. |
| | openstack | kubelet | placement-db-sync-tgjmt | Created | Created container: placement-db-sync |
| | openstack | kubelet | placement-db-sync-tgjmt | Started | Started container placement-db-sync |
| | openstack | multus | glance-7b9c2-default-internal-api-0 | AddedInterface | Add storage [172.18.0.30/24] from openstack/storage |
| | openstack | kubelet | glance-7b9c2-default-internal-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-glance-api@sha256:2a146cb0eb1a819e7b367354687fa3eeb3894fa4a03eadd0dc2e2c849345cbf0" already present on machine |
| | openstack | kubelet | glance-7b9c2-default-internal-api-0 | Created | Created container: glance-log |
| | openstack | job-controller | keystone-bootstrap | Completed | Job completed |
| | openstack | kubelet | glance-7b9c2-default-internal-api-0 | Started | Started container glance-log |
| | openstack | kubelet | glance-7b9c2-default-internal-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-glance-api@sha256:2a146cb0eb1a819e7b367354687fa3eeb3894fa4a03eadd0dc2e2c849345cbf0" already present on machine |
| | openstack | job-controller | keystone-bootstrap | SuccessfulCreate | Created pod: keystone-bootstrap-lc5mm |
| | openstack | job-controller | ironic-db-sync | SuccessfulCreate | Created pod: ironic-db-sync-8zl8z |
| | openstack | kubelet | glance-7b9c2-default-internal-api-0 | Created | Created container: glance-httpd |
| | openstack | kubelet | glance-7b9c2-default-internal-api-0 | Started | Started container glance-httpd |
| | openstack | kubelet | dnsmasq-dns-676f54c559-bfcw7 | Killing | Stopping container dnsmasq-dns |
| | openstack | replicaset-controller | dnsmasq-dns-676f54c559 | SuccessfulDelete | Deleted pod: dnsmasq-dns-676f54c559-bfcw7 |
| (x2) | openstack | kubelet | dnsmasq-dns-676f54c559-bfcw7 | Unhealthy | Readiness probe failed: dial tcp 10.128.0.208:5353: connect: connection refused |
| | openstack | kubelet | glance-7b9c2-default-external-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-glance-api@sha256:2a146cb0eb1a819e7b367354687fa3eeb3894fa4a03eadd0dc2e2c849345cbf0" already present on machine |
| | openstack | kubelet | cinder-04ef3-db-sync-smx72 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:3fa6e687aa002b92fedbfe2c1ccaa2906b399c58d17bf9ecece2c4cd69a0210b" in 17.732s (17.732s including waiting). Image size: 1160981798 bytes. |
| | openstack | multus | glance-7b9c2-default-external-api-0 | AddedInterface | Add eth0 [10.128.0.219/23] from ovn-kubernetes |
| | openstack | multus | glance-7b9c2-default-external-api-0 | AddedInterface | Add storage [172.18.0.31/24] from openstack/storage |
| | openstack | kubelet | ironic-db-sync-8zl8z | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-ironic-conductor@sha256:1f519a69686478381fe122716a13d116612a9b6eaeb47ab00ef4cd82b93468bf" |
| | openstack | kubelet | keystone-bootstrap-lc5mm | Started | Started container keystone-bootstrap |
| | openstack | job-controller | placement-db-sync | Completed | Job completed |
| | openstack | multus | ironic-db-sync-8zl8z | AddedInterface | Add eth0 [10.128.0.221/23] from ovn-kubernetes |
| | openstack | kubelet | keystone-bootstrap-lc5mm | Created | Created container: keystone-bootstrap |
| | openstack | kubelet | keystone-bootstrap-lc5mm | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-keystone@sha256:d832d062b84e8f6354ac9ace6aafd6fed301d95a94751db33338dccc1ab59605" already present on machine |
| | openstack | multus | keystone-bootstrap-lc5mm | AddedInterface | Add eth0 [10.128.0.220/23] from ovn-kubernetes |
| | openstack | kubelet | glance-7b9c2-default-external-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-glance-api@sha256:2a146cb0eb1a819e7b367354687fa3eeb3894fa4a03eadd0dc2e2c849345cbf0" already present on machine |
| | openstack | kubelet | glance-7b9c2-default-external-api-0 | Started | Started container glance-log |
| | openstack | kubelet | cinder-04ef3-db-sync-smx72 | Started | Started container cinder-04ef3-db-sync |
| | openstack | kubelet | cinder-04ef3-db-sync-smx72 | Created | Created container: cinder-04ef3-db-sync |
| | openstack | replicaset-controller | placement-5b57c6d9b6 | SuccessfulCreate | Created pod: placement-5b57c6d9b6-frt4v |
| | openstack | kubelet | glance-7b9c2-default-external-api-0 | Created | Created container: glance-log |
| | openstack | deployment-controller | placement | ScalingReplicaSet | Scaled up replica set placement-5b57c6d9b6 to 1 |
| | openstack | kubelet | placement-5b57c6d9b6-frt4v | Created | Created container: placement-log |
| | openstack | kubelet | placement-5b57c6d9b6-frt4v | Started | Started container placement-log |
| | openstack | kubelet | glance-7b9c2-default-external-api-0 | Created | Created container: glance-httpd |
| | openstack | multus | placement-5b57c6d9b6-frt4v | AddedInterface | Add eth0 [10.128.0.222/23] from ovn-kubernetes |
| | openstack | kubelet | placement-5b57c6d9b6-frt4v | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-placement-api@sha256:657020ed78b5d92505b0b4187dfcf078515484304fd39ce38702d4fb06f4ca36" already present on machine |
| | openstack | kubelet | glance-7b9c2-default-external-api-0 | Started | Started container glance-httpd |
| | openstack | kubelet | placement-5b57c6d9b6-frt4v | Started | Started container placement-api |
| | openstack | kubelet | placement-5b57c6d9b6-frt4v | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-placement-api@sha256:657020ed78b5d92505b0b4187dfcf078515484304fd39ce38702d4fb06f4ca36" already present on machine |
| | openstack | kubelet | placement-5b57c6d9b6-frt4v | Created | Created container: placement-api |
| | openstack | kubelet | ironic-db-sync-8zl8z | Started | Started container init |
| | openstack | kubelet | ironic-db-sync-8zl8z | Created | Created container: init |
| | openstack | kubelet | ironic-db-sync-8zl8z | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-ironic-conductor@sha256:1f519a69686478381fe122716a13d116612a9b6eaeb47ab00ef4cd82b93468bf" in 9.098s (9.098s including waiting). Image size: 598771786 bytes. |
| | openstack | replicaset-controller | keystone-7f77fccc4f | SuccessfulCreate | Created pod: keystone-7f77fccc4f-8svgt |
| | openstack | deployment-controller | keystone | ScalingReplicaSet | Scaled up replica set keystone-7f77fccc4f to 1 |
| | openstack | job-controller | keystone-bootstrap | Completed | Job completed |
| | openstack | kubelet | keystone-7f77fccc4f-8svgt | Created | Created container: keystone-api |
| | openstack | multus | keystone-7f77fccc4f-8svgt | AddedInterface | Add eth0 [10.128.0.223/23] from ovn-kubernetes |
| | openstack | kubelet | ironic-db-sync-8zl8z | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ironic-conductor@sha256:1f519a69686478381fe122716a13d116612a9b6eaeb47ab00ef4cd82b93468bf" already present on machine |
| | openstack | kubelet | ironic-db-sync-8zl8z | Created | Created container: ironic-db-sync |
| | openstack | kubelet | keystone-7f77fccc4f-8svgt | Started | Started container keystone-api |
| | openstack | kubelet | ironic-db-sync-8zl8z | Started | Started container ironic-db-sync |
| | openstack | kubelet | keystone-7f77fccc4f-8svgt | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-keystone@sha256:d832d062b84e8f6354ac9ace6aafd6fed301d95a94751db33338dccc1ab59605" already present on machine |
| (x2) | openstack | metallb-controller | cinder-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/address-pool |
| | openstack | replicaset-controller | dnsmasq-dns-dd74dd7c9 | SuccessfulCreate | Created pod: dnsmasq-dns-dd74dd7c9-jfb4s |
| | openstack | metallb-controller | cinder-internal | IPAllocated | Assigned IP ["172.17.0.80"] |
| (x2) | openstack | metallb-controller | cinder-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/allow-shared-ip |
| (x2) | openstack | metallb-controller | cinder-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs |
openstack |
job-controller |
cinder-04ef3-db-sync |
Completed |
Job completed | |
openstack |
cert-manager-certificates-key-manager |
cinder-internal-svc |
Generated |
Stored new private key in temporary Secret resource "cinder-internal-svc-6qp7s" | |
openstack |
cert-manager-certificates-trigger |
cinder-internal-svc |
Issuing |
Issuing certificate as Secret does not exist | |
openstack |
multus |
cinder-04ef3-backup-0 |
AddedInterface |
Add eth0 [10.128.0.225/23] from ovn-kubernetes | |
openstack |
multus |
cinder-04ef3-scheduler-0 |
AddedInterface |
Add eth0 [10.128.0.224/23] from ovn-kubernetes | |
openstack |
multus |
cinder-04ef3-api-0 |
AddedInterface |
Add eth0 [10.128.0.228/23] from ovn-kubernetes | |
openstack |
cert-manager-certificates-issuing |
cinder-internal-svc |
Issuing |
The certificate has been successfully issued | |
openstack |
kubelet |
cinder-04ef3-scheduler-0 |
Pulling |
Pulling image "quay.io/podified-antelope-centos9/openstack-cinder-scheduler@sha256:8f8adb9590f19d2d6c336c15aaef2d9a488501c1bbf5fbc8d96f097ae6297f20" | |
openstack |
multus |
cinder-04ef3-backup-0 |
AddedInterface |
Add storage [172.18.0.32/24] from openstack/storage | |
openstack |
cert-manager-certificates-key-manager |
cinder-public-svc |
Generated |
Stored new private key in temporary Secret resource "cinder-public-svc-jhs2j" | |
openstack |
cert-manager-certificates-trigger |
cinder-public-svc |
Issuing |
Issuing certificate as Secret does not exist | |
openstack |
cert-manager-certificaterequests-issuer-acme |
cinder-internal-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
kubelet |
cinder-04ef3-backup-0 |
Pulling |
Pulling image "quay.io/podified-antelope-centos9/openstack-cinder-backup@sha256:bbbef63104c8224bfc7c15a857d4ffd5d17acfb5bde654d48e3f6118d8c375f4" | |
openstack |
kubelet |
cinder-04ef3-api-0 |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:3fa6e687aa002b92fedbfe2c1ccaa2906b399c58d17bf9ecece2c4cd69a0210b" already present on machine | |
openstack |
multus |
dnsmasq-dns-dd74dd7c9-jfb4s |
AddedInterface |
Add eth0 [10.128.0.227/23] from ovn-kubernetes | |
openstack |
kubelet |
dnsmasq-dns-dd74dd7c9-jfb4s |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine | |
openstack |
kubelet |
cinder-04ef3-scheduler-0 |
Pulled |
Successfully pulled image "quay.io/podified-antelope-centos9/openstack-cinder-scheduler@sha256:8f8adb9590f19d2d6c336c15aaef2d9a488501c1bbf5fbc8d96f097ae6297f20" in 804ms (804ms including waiting). Image size: 1082812573 bytes. | |
openstack |
cert-manager-certificates-request-manager |
cinder-internal-svc |
Requested |
Created new CertificateRequest resource "cinder-internal-svc-1" | |
openstack |
kubelet |
dnsmasq-dns-dd74dd7c9-jfb4s |
Created |
Created container: init | |
openstack |
kubelet |
dnsmasq-dns-dd74dd7c9-jfb4s |
Started |
Started container init | |
| | openstack | cert-manager-certificaterequests-issuer-ca | cinder-internal-svc-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificaterequests-approver | cinder-internal-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificaterequests-issuer-venafi | cinder-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | multus | cinder-04ef3-volume-lvm-iscsi-0 | AddedInterface | Add eth0 [10.128.0.226/23] from ovn-kubernetes |
| | openstack | kubelet | cinder-04ef3-volume-lvm-iscsi-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-cinder-volume@sha256:fd7400929e60e49dc18a274e72df5abc3562c558d94b3e7094c7c960816e4386" |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | cinder-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-vault | cinder-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-ca | cinder-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | cinder-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-acme | cinder-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | kubelet | cinder-04ef3-volume-lvm-iscsi-0 | Started | Started container probe |
| | openstack | kubelet | cinder-04ef3-api-0 | Created | Created container: cinder-04ef3-api-log |
| | openstack | kubelet | cinder-04ef3-api-0 | Started | Started container cinder-04ef3-api-log |
| | openstack | kubelet | cinder-04ef3-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:3fa6e687aa002b92fedbfe2c1ccaa2906b399c58d17bf9ecece2c4cd69a0210b" already present on machine |
| | openstack | kubelet | dnsmasq-dns-dd74dd7c9-jfb4s | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine |
| | openstack | kubelet | dnsmasq-dns-dd74dd7c9-jfb4s | Created | Created container: dnsmasq-dns |
| | openstack | kubelet | dnsmasq-dns-dd74dd7c9-jfb4s | Started | Started container dnsmasq-dns |
| | openstack | kubelet | cinder-04ef3-backup-0 | Started | Started container probe |
| | openstack | kubelet | cinder-04ef3-backup-0 | Created | Created container: probe |
| | openstack | kubelet | cinder-04ef3-scheduler-0 | Created | Created container: cinder-scheduler |
| | openstack | kubelet | cinder-04ef3-scheduler-0 | Started | Started container cinder-scheduler |
| | openstack | cert-manager-certificates-issuing | cinder-public-svc | Issuing | The certificate has been successfully issued |
| | openstack | kubelet | cinder-04ef3-volume-lvm-iscsi-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-cinder-volume@sha256:fd7400929e60e49dc18a274e72df5abc3562c558d94b3e7094c7c960816e4386" already present on machine |
| | openstack | kubelet | cinder-04ef3-volume-lvm-iscsi-0 | Started | Started container cinder-volume |
| | openstack | kubelet | cinder-04ef3-volume-lvm-iscsi-0 | Created | Created container: cinder-volume |
| | openstack | kubelet | cinder-04ef3-volume-lvm-iscsi-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-cinder-volume@sha256:fd7400929e60e49dc18a274e72df5abc3562c558d94b3e7094c7c960816e4386" in 998ms (998ms including waiting). Image size: 1083753436 bytes. |
| | openstack | cert-manager-certificates-request-manager | cinder-public-svc | Requested | Created new CertificateRequest resource "cinder-public-svc-1" |
| | openstack | kubelet | cinder-04ef3-scheduler-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-cinder-scheduler@sha256:8f8adb9590f19d2d6c336c15aaef2d9a488501c1bbf5fbc8d96f097ae6297f20" already present on machine |
| | openstack | cert-manager-certificaterequests-approver | cinder-public-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | kubelet | cinder-04ef3-volume-lvm-iscsi-0 | Created | Created container: probe |
| | openstack | cert-manager-certificaterequests-issuer-ca | cinder-public-svc-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificaterequests-issuer-vault | cinder-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-ca | cinder-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-venafi | cinder-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | kubelet | cinder-04ef3-backup-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-cinder-backup@sha256:bbbef63104c8224bfc7c15a857d4ffd5d17acfb5bde654d48e3f6118d8c375f4" in 1.162s (1.162s including waiting). Image size: 1082817817 bytes. |
| | openstack | kubelet | cinder-04ef3-backup-0 | Created | Created container: cinder-backup |
| | openstack | kubelet | cinder-04ef3-backup-0 | Started | Started container cinder-backup |
| | openstack | kubelet | cinder-04ef3-backup-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-cinder-backup@sha256:bbbef63104c8224bfc7c15a857d4ffd5d17acfb5bde654d48e3f6118d8c375f4" already present on machine |
| | openstack | cert-manager-certificaterequests-issuer-acme | cinder-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-venafi | cinder-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | cinder-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-vault | cinder-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-ca | cinder-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-approver | cinder-public-route-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificaterequests-issuer-ca | cinder-public-route-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificates-trigger | cinder-public-route | Issuing | Issuing certificate as Secret does not exist |
| | openstack | cert-manager-certificates-key-manager | cinder-public-route | Generated | Stored new private key in temporary Secret resource "cinder-public-route-5gkxq" |
| | openstack | cert-manager-certificates-request-manager | cinder-public-route | Requested | Created new CertificateRequest resource "cinder-public-route-1" |
| | openstack | cert-manager-certificates-issuing | cinder-public-route | Issuing | The certificate has been successfully issued |
| | openstack | kubelet | cinder-04ef3-scheduler-0 | Started | Started container probe |
| | openstack | statefulset-controller | cinder-04ef3-api | SuccessfulDelete | delete Pod cinder-04ef3-api-0 in StatefulSet cinder-04ef3-api successful |
| | openstack | kubelet | cinder-04ef3-api-0 | Created | Created container: cinder-api |
| | openstack | kubelet | cinder-04ef3-api-0 | Started | Started container cinder-api |
| | openstack | kubelet | cinder-04ef3-scheduler-0 | Created | Created container: probe |
| | openstack | kubelet | cinder-04ef3-api-0 | Killing | Stopping container cinder-api |
| | openstack | kubelet | cinder-04ef3-api-0 | Killing | Stopping container cinder-04ef3-api-log |
| | openstack | statefulset-controller | cinder-04ef3-api | SuccessfulCreate | create Pod cinder-04ef3-api-0 in StatefulSet cinder-04ef3-api successful (x2) |
| | openstack | metallb-controller | neutron-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/address-pool (x2) |
| | openstack | metallb-speaker | dnsmasq-dns | nodeAssigned | announcing from node "master-0" with protocol "layer2" (x25) |
| | openstack | replicaset-controller | dnsmasq-dns-c54fb858c | SuccessfulCreate | Created pod: dnsmasq-dns-c54fb858c-f69kf |
| | openstack | kubelet | cinder-04ef3-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:3fa6e687aa002b92fedbfe2c1ccaa2906b399c58d17bf9ecece2c4cd69a0210b" already present on machine |
| | openstack | multus | cinder-04ef3-api-0 | AddedInterface | Add eth0 [10.128.0.229/23] from ovn-kubernetes |
| | openstack | deployment-controller | neutron | ScalingReplicaSet | Scaled up replica set neutron-5c5cd8d to 1 |
| | openstack | metallb-controller | neutron-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/allow-shared-ip (x2) |
| | openstack | metallb-controller | neutron-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs (x2) |
| | openstack | replicaset-controller | dnsmasq-dns-dd74dd7c9 | SuccessfulDelete | Deleted pod: dnsmasq-dns-dd74dd7c9-jfb4s |
| | openstack | kubelet | dnsmasq-dns-dd74dd7c9-jfb4s | Killing | Stopping container dnsmasq-dns |
| | openstack | metallb-controller | neutron-internal | IPAllocated | Assigned IP ["172.17.0.80"] |
| | openstack | cert-manager-certificates-trigger | neutron-internal-svc | Issuing | Issuing certificate as Secret does not exist |
| | openstack | replicaset-controller | neutron-5c5cd8d | SuccessfulCreate | Created pod: neutron-5c5cd8d-bjbtl |
| | openstack | job-controller | neutron-db-sync | Completed | Job completed |
| | openstack | cert-manager-certificates-request-manager | neutron-internal-svc | Requested | Created new CertificateRequest resource "neutron-internal-svc-1" |
| | openstack | kubelet | cinder-04ef3-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:3fa6e687aa002b92fedbfe2c1ccaa2906b399c58d17bf9ecece2c4cd69a0210b" already present on machine |
| | openstack | multus | neutron-5c5cd8d-bjbtl | AddedInterface | Add eth0 [10.128.0.231/23] from ovn-kubernetes |
| | openstack | multus | neutron-5c5cd8d-bjbtl | AddedInterface | Add internalapi [172.17.0.32/24] from openstack/internalapi |
| | openstack | cert-manager-certificates-issuing | neutron-public-svc | Issuing | The certificate has been successfully issued |
| | openstack | cert-manager-certificates-request-manager | neutron-public-svc | Requested | Created new CertificateRequest resource "neutron-public-svc-1" |
| | openstack | cert-manager-certificates-key-manager | neutron-public-svc | Generated | Stored new private key in temporary Secret resource "neutron-public-svc-q6mx6" |
| | openstack | cert-manager-certificates-trigger | neutron-public-svc | Issuing | Issuing certificate as Secret does not exist |
| | openstack | cert-manager-certificaterequests-issuer-ca | neutron-public-svc-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | multus | dnsmasq-dns-c54fb858c-f69kf | AddedInterface | Add eth0 [10.128.0.230/23] from ovn-kubernetes |
| | openstack | cert-manager-certificaterequests-approver | neutron-public-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificaterequests-issuer-vault | neutron-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-acme | neutron-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-ca | neutron-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-venafi | neutron-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | neutron-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | kubelet | dnsmasq-dns-c54fb858c-f69kf | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine |
| | openstack | cert-manager-certificates-issuing | neutron-internal-svc | Issuing | The certificate has been successfully issued |
| | openstack | cert-manager-certificates-key-manager | neutron-internal-svc | Generated | Stored new private key in temporary Secret resource "neutron-internal-svc-n9r8w" |
| | openstack | cert-manager-certificaterequests-issuer-ca | neutron-internal-svc-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificaterequests-approver | neutron-internal-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificaterequests-issuer-venafi | neutron-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | neutron-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-vault | neutron-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-ca | neutron-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-acme | neutron-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | kubelet | cinder-04ef3-api-0 | Created | Created container: cinder-04ef3-api-log |
| | openstack | kubelet | cinder-04ef3-api-0 | Started | Started container cinder-04ef3-api-log |
| | openstack | cert-manager-certificates-request-manager | neutron-public-route | Requested | Created new CertificateRequest resource "neutron-public-route-1" |
| | openstack | cert-manager-certificaterequests-issuer-acme | neutron-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-ca | neutron-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-vault | neutron-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | kubelet | dnsmasq-dns-c54fb858c-f69kf | Created | Created container: init |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | neutron-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | kubelet | dnsmasq-dns-c54fb858c-f69kf | Started | Started container init |
| | openstack | kubelet | cinder-04ef3-api-0 | Started | Started container cinder-api |
| | openstack | kubelet | cinder-04ef3-api-0 | Created | Created container: cinder-api |
| | openstack | cert-manager-certificaterequests-issuer-venafi | neutron-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | kubelet | neutron-5c5cd8d-bjbtl | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine |
| | openstack | cert-manager-certificates-key-manager | neutron-public-route | Generated | Stored new private key in temporary Secret resource "neutron-public-route-zx6dn" |
| | openstack | kubelet | neutron-5c5cd8d-bjbtl | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine |
| | openstack | kubelet | dnsmasq-dns-c54fb858c-f69kf | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine |
| | openstack | kubelet | dnsmasq-dns-c54fb858c-f69kf | Created | Created container: dnsmasq-dns |
| | openstack | cert-manager-certificaterequests-approver | neutron-public-route-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | kubelet | dnsmasq-dns-c54fb858c-f69kf | Started | Started container dnsmasq-dns |
| | openstack | cert-manager-certificaterequests-issuer-ca | neutron-public-route-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificates-trigger | neutron-public-route | Issuing | Issuing certificate as Secret does not exist |
| | openstack | kubelet | neutron-5c5cd8d-bjbtl | Created | Created container: neutron-httpd |
| | openstack | kubelet | neutron-5c5cd8d-bjbtl | Started | Started container neutron-httpd |
| | openstack | kubelet | neutron-5c5cd8d-bjbtl | Started | Started container neutron-api |
| | openstack | cert-manager-certificates-issuing | neutron-public-route | Issuing | The certificate has been successfully issued |
| | openstack | kubelet | neutron-5c5cd8d-bjbtl | Created | Created container: neutron-api |
| | openstack | statefulset-controller | cinder-04ef3-backup | SuccessfulDelete | delete Pod cinder-04ef3-backup-0 in StatefulSet cinder-04ef3-backup successful |
| | openstack | multus | neutron-7c6d47966f-zhq5k | AddedInterface | Add eth0 [10.128.0.232/23] from ovn-kubernetes |
| | openstack | kubelet | cinder-04ef3-backup-0 | Killing | Stopping container probe |
| | openstack | replicaset-controller | neutron-7c6d47966f | SuccessfulCreate | Created pod: neutron-7c6d47966f-zhq5k |
| | openstack | kubelet | cinder-04ef3-backup-0 | Killing | Stopping container cinder-backup |
| | openstack | deployment-controller | neutron | ScalingReplicaSet | Scaled up replica set neutron-7c6d47966f to 1 |
| | openstack | statefulset-controller | cinder-04ef3-scheduler | SuccessfulDelete | delete Pod cinder-04ef3-scheduler-0 in StatefulSet cinder-04ef3-scheduler successful |
| | openstack | statefulset-controller | cinder-04ef3-volume-lvm-iscsi | SuccessfulDelete | delete Pod cinder-04ef3-volume-lvm-iscsi-0 in StatefulSet cinder-04ef3-volume-lvm-iscsi successful |
| | openstack | kubelet | cinder-04ef3-scheduler-0 | Killing | Stopping container probe |
| | openstack | kubelet | cinder-04ef3-scheduler-0 | Killing | Stopping container cinder-scheduler |
| | openstack | kubelet | neutron-7c6d47966f-zhq5k | Created | Created container: neutron-api |
| | openstack | kubelet | neutron-7c6d47966f-zhq5k | Started | Started container neutron-api |
| | openstack | kubelet | cinder-04ef3-volume-lvm-iscsi-0 | Killing | Stopping container cinder-volume |
| | openstack | kubelet | cinder-04ef3-volume-lvm-iscsi-0 | Killing | Stopping container probe |
| | openstack | kubelet | neutron-7c6d47966f-zhq5k | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine |
| | openstack | kubelet | neutron-7c6d47966f-zhq5k | Created | Created container: neutron-httpd |
| | openstack | kubelet | neutron-7c6d47966f-zhq5k | Started | Started container neutron-httpd |
| | openstack | multus | neutron-7c6d47966f-zhq5k | AddedInterface | Add internalapi [172.17.0.33/24] from openstack/internalapi |
| | openstack | kubelet | neutron-7c6d47966f-zhq5k | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine |
| | openstack | statefulset-controller | cinder-04ef3-backup | SuccessfulCreate | create Pod cinder-04ef3-backup-0 in StatefulSet cinder-04ef3-backup successful (x2) |
| | openstack | statefulset-controller | cinder-04ef3-scheduler | SuccessfulCreate | create Pod cinder-04ef3-scheduler-0 in StatefulSet cinder-04ef3-scheduler successful (x2) |
| | openstack | multus | cinder-04ef3-backup-0 | AddedInterface | Add eth0 [10.128.0.233/23] from ovn-kubernetes |
| | openstack | multus | cinder-04ef3-scheduler-0 | AddedInterface | Add eth0 [10.128.0.234/23] from ovn-kubernetes |
| | openstack | kubelet | cinder-04ef3-scheduler-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-cinder-scheduler@sha256:8f8adb9590f19d2d6c336c15aaef2d9a488501c1bbf5fbc8d96f097ae6297f20" already present on machine |
| | openstack | statefulset-controller | cinder-04ef3-volume-lvm-iscsi | SuccessfulCreate | create Pod cinder-04ef3-volume-lvm-iscsi-0 in StatefulSet cinder-04ef3-volume-lvm-iscsi successful (x2) |
| | openstack | multus | cinder-04ef3-backup-0 | AddedInterface | Add storage [172.18.0.32/24] from openstack/storage |
| | openstack | kubelet | cinder-04ef3-backup-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-cinder-backup@sha256:bbbef63104c8224bfc7c15a857d4ffd5d17acfb5bde654d48e3f6118d8c375f4" already present on machine |
| | openstack | kubelet | cinder-04ef3-volume-lvm-iscsi-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-cinder-volume@sha256:fd7400929e60e49dc18a274e72df5abc3562c558d94b3e7094c7c960816e4386" already present on machine |
| | openstack | multus | cinder-04ef3-volume-lvm-iscsi-0 | AddedInterface | Add eth0 [10.128.0.235/23] from ovn-kubernetes |
| | openstack | kubelet | cinder-04ef3-scheduler-0 | Started | Started container cinder-scheduler |
| | openstack | kubelet | cinder-04ef3-backup-0 | Started | Started container probe |
| | openstack | kubelet | cinder-04ef3-backup-0 | Created | Created container: cinder-backup |
| | openstack | kubelet | cinder-04ef3-backup-0 | Started | Started container cinder-backup |
| | openstack | kubelet | cinder-04ef3-scheduler-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-cinder-scheduler@sha256:8f8adb9590f19d2d6c336c15aaef2d9a488501c1bbf5fbc8d96f097ae6297f20" already present on machine |
| | openstack | kubelet | cinder-04ef3-backup-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-cinder-backup@sha256:bbbef63104c8224bfc7c15a857d4ffd5d17acfb5bde654d48e3f6118d8c375f4" already present on machine |
| | openstack | kubelet | cinder-04ef3-scheduler-0 | Created | Created container: cinder-scheduler |
| | openstack | kubelet | cinder-04ef3-backup-0 | Created | Created container: probe |
| | openstack | kubelet | cinder-04ef3-volume-lvm-iscsi-0 | Created | Created container: cinder-volume |
| | openstack | kubelet | cinder-04ef3-volume-lvm-iscsi-0 | Started | Started container probe |
| | openstack | kubelet | cinder-04ef3-volume-lvm-iscsi-0 | Started | Started container cinder-volume |
| | openstack | kubelet | cinder-04ef3-volume-lvm-iscsi-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-cinder-volume@sha256:fd7400929e60e49dc18a274e72df5abc3562c558d94b3e7094c7c960816e4386" already present on machine |
| | openstack | kubelet | cinder-04ef3-scheduler-0 | Started | Started container probe |
| | openstack | kubelet | cinder-04ef3-volume-lvm-iscsi-0 | Created | Created container: probe |
| | openstack | kubelet | cinder-04ef3-scheduler-0 | Created | Created container: probe |
| | openstack | topolvm.io_lvms-operator-59b4cb8ccf-q5dk5_41ba5f2e-d293-4c72-bf87-f4da6e126ac2 | var-lib-ironic-ironic-conductor-0 | Provisioning | External provisioner is provisioning volume for claim "openstack/var-lib-ironic-ironic-conductor-0" |
| | openstack | metallb-controller | ironic-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/address-pool (x2) |
| | openstack | job-controller | ironic-inspector-db-create | SuccessfulCreate | Created pod: ironic-inspector-db-create-vmh7f |
| | openstack | statefulset-controller | ironic-conductor | SuccessfulCreate | create Pod ironic-conductor-0 in StatefulSet ironic-conductor successful |
| | openstack | replicaset-controller | dnsmasq-dns-c54fb858c | SuccessfulDelete | Deleted pod: dnsmasq-dns-c54fb858c-f69kf |
| | openstack | metallb-controller | ironic-internal | IPAllocated | Assigned IP ["192.168.122.80"] |
| | openstack | statefulset-controller | ironic-conductor | SuccessfulCreate | create Claim var-lib-ironic-ironic-conductor-0 Pod ironic-conductor-0 in StatefulSet ironic-conductor success |
| | openstack | metallb-controller | ironic-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/allow-shared-ip (x2) |
| | openstack | persistentvolume-controller | var-lib-ironic-ironic-conductor-0 | ExternalProvisioning | Waiting for a volume to be created either by the external provisioner 'topolvm.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered. (x2) |
| | openstack | persistentvolume-controller | var-lib-ironic-ironic-conductor-0 | WaitForFirstConsumer | waiting for first consumer to be created before binding |
| | openstack | job-controller | ironic-db-sync | Completed | Job completed |
| | openstack | metallb-controller | ironic-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs (x2) |
| | openstack | deployment-controller | ironic-neutron-agent | ScalingReplicaSet | Scaled up replica set ironic-neutron-agent-88dd96889 to 1 |
| | openstack | kubelet | dnsmasq-dns-c54fb858c-f69kf | Unhealthy | Readiness probe failed: dial tcp 10.128.0.230:5353: connect: connection refused |
| | openstack | replicaset-controller | dnsmasq-dns-6b9c77ddfc | SuccessfulCreate | Created pod: dnsmasq-dns-6b9c77ddfc-d9zgc |
| | openstack | replicaset-controller | ironic-neutron-agent-88dd96889 | SuccessfulCreate | Created pod: ironic-neutron-agent-88dd96889-vwkh6 |
| | openstack | kubelet | dnsmasq-dns-c54fb858c-f69kf | Killing | Stopping container dnsmasq-dns |
| | openstack | deployment-controller | ironic | ScalingReplicaSet | Scaled up replica set ironic-7b6b8d45d to 1 |
| | openstack | job-controller | ironic-inspector-016b-account-create-update | SuccessfulCreate | Created pod: ironic-inspector-016b-account-create-update-v8zdc |
| | openstack | topolvm.io_lvms-operator-59b4cb8ccf-q5dk5_41ba5f2e-d293-4c72-bf87-f4da6e126ac2 | var-lib-ironic-ironic-conductor-0 | ProvisioningSucceeded | Successfully provisioned volume pvc-61b8ec08-c1ae-4dfd-b80c-a05eee1e3066 |
| | openstack | replicaset-controller | ironic-7b6b8d45d | SuccessfulCreate | Created pod: ironic-7b6b8d45d-l4pv4 |
| | openstack | cert-manager-certificates-trigger | ironic-internal-svc | Issuing | Issuing certificate as Secret does not exist |
| | openstack | kubelet | ironic-inspector-db-create-vmh7f | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" already present on machine |
| | openstack | multus | ironic-inspector-db-create-vmh7f | AddedInterface | Add eth0 [10.128.0.236/23] from ovn-kubernetes |
| | openstack | cert-manager-certificates-request-manager | ironic-internal-svc | Requested | Created new CertificateRequest resource "ironic-internal-svc-1" |
| | openstack | cert-manager-certificaterequests-issuer-acme | ironic-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | kubelet | ironic-neutron-agent-88dd96889-vwkh6 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-ironic-neutron-agent@sha256:ae2235391072c57f6d1b73edb0ee681884583d13b4493841e9d8e46fe4768320" |
| | openstack | cert-manager-certificaterequests-issuer-vault | ironic-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-ca | ironic-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | multus | ironic-7b6b8d45d-l4pv4 | AddedInterface | Add eth0 [10.128.0.240/23] from ovn-kubernetes |
| | openstack | multus | ironic-neutron-agent-88dd96889-vwkh6 | AddedInterface | Add eth0 [10.128.0.239/23] from ovn-kubernetes |
| | openstack | cert-manager-certificates-key-manager | ironic-internal-svc | Generated | Stored new private key in temporary Secret resource "ironic-internal-svc-btj5l" |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | ironic-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | multus | ironic-inspector-016b-account-create-update-v8zdc | AddedInterface | Add eth0 [10.128.0.237/23] from ovn-kubernetes |
| | openstack | kubelet | ironic-inspector-016b-account-create-update-v8zdc | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" already present on machine |
| | openstack | kubelet | ironic-inspector-db-create-vmh7f | Created | Created container: mariadb-database-create |
| | openstack | cert-manager-certificaterequests-issuer-venafi | ironic-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | kubelet | ironic-inspector-db-create-vmh7f | Started | Started container mariadb-database-create |
| | openstack | multus | dnsmasq-dns-6b9c77ddfc-d9zgc | AddedInterface | Add eth0 [10.128.0.238/23] from ovn-kubernetes |
| | openstack | cert-manager-certificaterequests-issuer-ca | ironic-internal-svc-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | kubelet | ironic-inspector-016b-account-create-update-v8zdc | Created | Created container: mariadb-account-create-update |
| | openstack | cert-manager-certificaterequests-approver | ironic-internal-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | metallb-speaker | cinder-internal | nodeAssigned | announcing from node "master-0" with protocol "layer2" |
| | openstack | deployment-controller | placement | ScalingReplicaSet | Scaled up replica set placement-5fd74d8d4b to 1 |
| | openstack | kubelet | ironic-7b6b8d45d-l4pv4 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-ironic-api@sha256:bb4ff085a07cb6a042d47ffb3cd4757cf3d07d1bf85fade3b7da9a2a0b404b4e" |
| | openstack | kubelet | ironic-inspector-016b-account-create-update-v8zdc | Started | Started container mariadb-account-create-update |
| | openstack | kubelet | dnsmasq-dns-6b9c77ddfc-d9zgc | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine |
| | openstack | kubelet | dnsmasq-dns-6b9c77ddfc-d9zgc | Created | Created container: init |
| | openstack | kubelet | dnsmasq-dns-6b9c77ddfc-d9zgc | Started | Started container init |
| | openstack | multus | ironic-conductor-0 | AddedInterface | Add ironic [172.20.1.31/24] from openstack/ironic |
| | openstack | multus | ironic-conductor-0 | AddedInterface | Add eth0 [10.128.0.241/23] from ovn-kubernetes |
| | openstack | replicaset-controller | placement-5fd74d8d4b | SuccessfulCreate | Created pod: placement-5fd74d8d4b-qd7wh |
| | openstack | cert-manager-certificates-issuing | ironic-internal-svc | Issuing | The certificate has been successfully issued |
| | openstack | cert-manager-certificates-trigger | ironic-public-svc | Issuing | Issuing certificate as Secret does not exist |
| | openstack | kubelet | dnsmasq-dns-6b9c77ddfc-d9zgc | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine |
openstack |
multus |
placement-5fd74d8d4b-qd7wh |
AddedInterface |
Add eth0 [10.128.0.242/23] from ovn-kubernetes | |
openstack |
kubelet |
placement-5fd74d8d4b-qd7wh |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-placement-api@sha256:657020ed78b5d92505b0b4187dfcf078515484304fd39ce38702d4fb06f4ca36" already present on machine | |
openstack |
kubelet |
ironic-neutron-agent-88dd96889-vwkh6 |
Pulled |
Successfully pulled image "quay.io/podified-antelope-centos9/openstack-ironic-neutron-agent@sha256:ae2235391072c57f6d1b73edb0ee681884583d13b4493841e9d8e46fe4768320" in 2.993s (2.993s including waiting). Image size: 654754132 bytes. | |
openstack |
kubelet |
ironic-conductor-0 |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-ironic-conductor@sha256:1f519a69686478381fe122716a13d116612a9b6eaeb47ab00ef4cd82b93468bf" already present on machine | |
openstack |
cert-manager-certificates-issuing |
ironic-public-svc |
Issuing |
The certificate has been successfully issued | |
openstack |
cert-manager-certificates-request-manager |
ironic-public-svc |
Requested |
Created new CertificateRequest resource "ironic-public-svc-1" | |
openstack |
cert-manager-certificates-key-manager |
ironic-public-svc |
Generated |
Stored new private key in temporary Secret resource "ironic-public-svc-5dttr" | |
openstack |
cert-manager-certificaterequests-issuer-ca |
ironic-public-svc-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack |
cert-manager-certificaterequests-approver |
ironic-public-svc-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack |
cert-manager-certificaterequests-issuer-vault |
ironic-public-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-ca |
ironic-public-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificates-trigger |
ironic-public-route |
Issuing |
Issuing certificate as Secret does not exist | |
openstack |
cert-manager-certificaterequests-issuer-acme |
ironic-public-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-venafi |
ironic-public-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-selfsigned |
ironic-public-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
kubelet |
placement-5fd74d8d4b-qd7wh |
Started |
Started container placement-api | |
openstack |
cert-manager-certificates-key-manager |
ironic-public-route |
Generated |
Stored new private key in temporary Secret resource "ironic-public-route-9jc4z" | |
openstack |
job-controller |
ironic-inspector-db-create |
Completed |
Job completed | |
openstack |
cert-manager-certificaterequests-issuer-ca |
ironic-public-route-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack |
cert-manager-certificaterequests-approver |
ironic-public-route-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack |
cert-manager-certificaterequests-issuer-vault |
ironic-public-route-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-ca |
ironic-public-route-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
kubelet |
placement-5fd74d8d4b-qd7wh |
Started |
Started container placement-log | |
openstack |
cert-manager-certificaterequests-issuer-acme |
ironic-public-route-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-venafi |
ironic-public-route-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-selfsigned |
ironic-public-route-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
kubelet |
placement-5fd74d8d4b-qd7wh |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-placement-api@sha256:657020ed78b5d92505b0b4187dfcf078515484304fd39ce38702d4fb06f4ca36" already present on machine | |
openstack |
deployment-controller |
ironic |
ScalingReplicaSet |
Scaled up replica set ironic-566cf67fc4 to 1 | |
openstack |
kubelet |
placement-5fd74d8d4b-qd7wh |
Created |
Created container: placement-api | |
openstack |
cert-manager-certificates-issuing |
ironic-public-route |
Issuing |
The certificate has been successfully issued | |
openstack |
cert-manager-certificates-request-manager |
ironic-public-route |
Requested |
Created new CertificateRequest resource "ironic-public-route-1" | |
openstack |
replicaset-controller |
ironic-566cf67fc4 |
SuccessfulCreate |
Created pod: ironic-566cf67fc4-2bm2p | |
openstack |
kubelet |
placement-5fd74d8d4b-qd7wh |
Created |
Created container: placement-log | |
openstack |
kubelet |
dnsmasq-dns-6b9c77ddfc-d9zgc |
Created |
Created container: dnsmasq-dns | |
openstack |
kubelet |
dnsmasq-dns-6b9c77ddfc-d9zgc |
Started |
Started container dnsmasq-dns | |
openstack |
kubelet |
ironic-conductor-0 |
Started |
Started container init | |
openstack |
kubelet |
ironic-conductor-0 |
Created |
Created container: init | |
openstack |
job-controller |
ironic-inspector-016b-account-create-update |
Completed |
Job completed | |
openstack |
kubelet |
ironic-7b6b8d45d-l4pv4 |
Pulled |
Successfully pulled image "quay.io/podified-antelope-centos9/openstack-ironic-api@sha256:bb4ff085a07cb6a042d47ffb3cd4757cf3d07d1bf85fade3b7da9a2a0b404b4e" in 4.121s (4.121s including waiting). Image size: 535909152 bytes. | |
openstack |
kubelet |
ironic-7b6b8d45d-l4pv4 |
Started |
Started container init | |
openstack |
multus |
ironic-566cf67fc4-2bm2p |
AddedInterface |
Add eth0 [10.128.0.243/23] from ovn-kubernetes | |
openstack |
kubelet |
ironic-566cf67fc4-2bm2p |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-ironic-api@sha256:bb4ff085a07cb6a042d47ffb3cd4757cf3d07d1bf85fade3b7da9a2a0b404b4e" already present on machine | |
openstack |
kubelet |
ironic-566cf67fc4-2bm2p |
Created |
Created container: init | |
openstack |
kubelet |
ironic-566cf67fc4-2bm2p |
Started |
Started container init | |
openstack |
kubelet |
ironic-7b6b8d45d-l4pv4 |
Created |
Created container: init | |
openstack |
kubelet |
ironic-7b6b8d45d-l4pv4 |
Created |
Created container: ironic-api-log | |
openstack |
kubelet |
ironic-566cf67fc4-2bm2p |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-ironic-api@sha256:bb4ff085a07cb6a042d47ffb3cd4757cf3d07d1bf85fade3b7da9a2a0b404b4e" already present on machine | |
openstack |
kubelet |
ironic-566cf67fc4-2bm2p |
Created |
Created container: ironic-api | |
openstack |
kubelet |
ironic-7b6b8d45d-l4pv4 |
Started |
Started container ironic-api-log | |
openstack |
kubelet |
ironic-566cf67fc4-2bm2p |
Created |
Created container: ironic-api-log | |
openstack |
kubelet |
ironic-7b6b8d45d-l4pv4 |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-ironic-api@sha256:bb4ff085a07cb6a042d47ffb3cd4757cf3d07d1bf85fade3b7da9a2a0b404b4e" already present on machine | |
openstack |
kubelet |
ironic-566cf67fc4-2bm2p |
Started |
Started container ironic-api-log | |
openstack |
kubelet |
ironic-566cf67fc4-2bm2p |
Started |
Started container ironic-api | |
openstack |
kubelet |
ironic-566cf67fc4-2bm2p |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-ironic-api@sha256:bb4ff085a07cb6a042d47ffb3cd4757cf3d07d1bf85fade3b7da9a2a0b404b4e" already present on machine | |
openstack |
kubelet |
ironic-conductor-0 |
Pulling |
Pulling image "quay.io/podified-antelope-centos9/ironic-python-agent@sha256:4527428e1352822052893ac7d017dee4d225eb1fe63635644aceec4d514b6df0" | |
| (x2) | openstack | kubelet | ironic-7b6b8d45d-l4pv4 | Started | Started container ironic-api |
| | openstack | metallb-speaker | keystone-internal | nodeAssigned | announcing from node "master-0" with protocol "layer2" |
| | openstack | replicaset-controller | dnsmasq-dns-d687b68b9 | SuccessfulDelete | Deleted pod: dnsmasq-dns-d687b68b9-7r7fm |
| (x2) | openstack | kubelet | ironic-7b6b8d45d-l4pv4 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ironic-api@sha256:bb4ff085a07cb6a042d47ffb3cd4757cf3d07d1bf85fade3b7da9a2a0b404b4e" already present on machine |
| (x2) | openstack | kubelet | ironic-7b6b8d45d-l4pv4 | Created | Created container: ironic-api |
| | openstack | kubelet | dnsmasq-dns-d687b68b9-7r7fm | Killing | Stopping container dnsmasq-dns |
| (x2) | openstack | kubelet | ironic-7b6b8d45d-l4pv4 | BackOff | Back-off restarting failed container ironic-api in pod ironic-7b6b8d45d-l4pv4_openstack(5fe78f22-b268-44d3-8be8-d305135ed9ca) |
| | openstack | deployment-controller | ironic | ScalingReplicaSet | Scaled down replica set ironic-7b6b8d45d to 0 from 1 |
| | openstack | replicaset-controller | ironic-7b6b8d45d | SuccessfulDelete | Deleted pod: ironic-7b6b8d45d-l4pv4 |
| | openstack | kubelet | ironic-7b6b8d45d-l4pv4 | Killing | Stopping container ironic-api-log |
| | openstack | kubelet | openstackclient | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-openstackclient@sha256:e1e8f9b33b9cbd07e1c9984d894a3237e9469672fb9b346889a34ba3276298e4" |
| | openstack | job-controller | ironic-inspector-db-sync | SuccessfulCreate | Created pod: ironic-inspector-db-sync-x86bq |
| | openstack | multus | openstackclient | AddedInterface | Add eth0 [10.128.0.244/23] from ovn-kubernetes |
| (x3) | openstack | metallb-speaker | ironic-internal | nodeAssigned | announcing from node "master-0" with protocol "layer2" |
| | openstack | kubelet | ironic-inspector-db-sync-x86bq | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-ironic-inspector@sha256:696ca56ff35797483603be60573aabc2d626a9e2886b14fbd163b25bbd01443e" |
| | openstack | multus | ironic-inspector-db-sync-x86bq | AddedInterface | Add eth0 [10.128.0.245/23] from ovn-kubernetes |
| (x2) | openstack | kubelet | ironic-neutron-agent-88dd96889-vwkh6 | BackOff | Back-off restarting failed container ironic-neutron-agent in pod ironic-neutron-agent-88dd96889-vwkh6_openstack(ea8f52d0-e4bb-4457-b7f7-33133e152096) |
| | openstack | job-controller | nova-cell1-db-create | SuccessfulCreate | Created pod: nova-cell1-db-create-69tfm |
| | openstack | job-controller | nova-cell0-db-create | SuccessfulCreate | Created pod: nova-cell0-db-create-pbs2f |
| | openstack | deployment-controller | swift-proxy | ScalingReplicaSet | Scaled up replica set swift-proxy-67bfcfbcf8 to 1 |
| | openstack | job-controller | nova-api-87e5-account-create-update | SuccessfulCreate | Created pod: nova-api-87e5-account-create-update-45dj5 |
| | openstack | job-controller | nova-cell0-5cd4-account-create-update | SuccessfulCreate | Created pod: nova-cell0-5cd4-account-create-update-hwzx4 |
| | openstack | job-controller | nova-api-db-create | SuccessfulCreate | Created pod: nova-api-db-create-4lmzn |
| | openstack | replicaset-controller | swift-proxy-67bfcfbcf8 | SuccessfulCreate | Created pod: swift-proxy-67bfcfbcf8-m9tkq |
| | openstack | multus | nova-api-db-create-4lmzn | AddedInterface | Add eth0 [10.128.0.246/23] from ovn-kubernetes |
| | openstack | multus | nova-api-87e5-account-create-update-45dj5 | AddedInterface | Add eth0 [10.128.0.249/23] from ovn-kubernetes |
| | openstack | multus | nova-cell0-5cd4-account-create-update-hwzx4 | AddedInterface | Add eth0 [10.128.0.251/23] from ovn-kubernetes |
| | openstack | kubelet | nova-api-db-create-4lmzn | Started | Started container mariadb-database-create |
| | openstack | kubelet | nova-api-db-create-4lmzn | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" already present on machine |
| | openstack | multus | nova-cell0-db-create-pbs2f | AddedInterface | Add eth0 [10.128.0.247/23] from ovn-kubernetes |
| | openstack | kubelet | nova-api-db-create-4lmzn | Created | Created container: mariadb-database-create |
| | openstack | multus | swift-proxy-67bfcfbcf8-m9tkq | AddedInterface | Add eth0 [10.128.0.248/23] from ovn-kubernetes |
| | openstack | job-controller | nova-cell1-f7f8-account-create-update | SuccessfulCreate | Created pod: nova-cell1-f7f8-account-create-update-2x5s2 |
| | openstack | multus | nova-cell1-db-create-69tfm | AddedInterface | Add eth0 [10.128.0.250/23] from ovn-kubernetes |
| | openstack | multus | nova-cell1-f7f8-account-create-update-2x5s2 | AddedInterface | Add eth0 [10.128.0.252/23] from ovn-kubernetes |
| | openstack | kubelet | nova-cell0-5cd4-account-create-update-hwzx4 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" already present on machine |
| | openstack | kubelet | nova-api-87e5-account-create-update-45dj5 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" already present on machine |
| | openstack | kubelet | nova-cell1-db-create-69tfm | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" already present on machine |
| | openstack | kubelet | swift-proxy-67bfcfbcf8-m9tkq | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-swift-proxy-server@sha256:32aab2bf162442b5c6bbb3716fbdb0ec53cb67d6b0e7f018766b29cd8cb8692d" already present on machine |
| | openstack | kubelet | nova-cell0-db-create-pbs2f | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" already present on machine |
| | openstack | kubelet | nova-cell1-f7f8-account-create-update-2x5s2 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" already present on machine |
| | openstack | kubelet | nova-api-87e5-account-create-update-45dj5 | Created | Created container: mariadb-account-create-update |
| | openstack | kubelet | nova-api-87e5-account-create-update-45dj5 | Started | Started container mariadb-account-create-update |
| (x2) | openstack | kubelet | ironic-neutron-agent-88dd96889-vwkh6 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ironic-neutron-agent@sha256:ae2235391072c57f6d1b73edb0ee681884583d13b4493841e9d8e46fe4768320" already present on machine |
| | openstack | kubelet | ironic-inspector-db-sync-x86bq | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-ironic-inspector@sha256:696ca56ff35797483603be60573aabc2d626a9e2886b14fbd163b25bbd01443e" in 22.682s (22.682s including waiting). Image size: 539211350 bytes. |
| (x3) | openstack | kubelet | ironic-neutron-agent-88dd96889-vwkh6 | Started | Started container ironic-neutron-agent |
| | openstack | kubelet | ironic-inspector-db-sync-x86bq | Created | Created container: ironic-inspector-db-sync |
| | openstack | kubelet | nova-cell0-db-create-pbs2f | Created | Created container: mariadb-database-create |
| | openstack | kubelet | nova-cell0-db-create-pbs2f | Started | Started container mariadb-database-create |
| | openstack | kubelet | nova-cell0-5cd4-account-create-update-hwzx4 | Started | Started container mariadb-account-create-update |
| | openstack | kubelet | openstackclient | Started | Started container openstackclient |
| | openstack | kubelet | openstackclient | Created | Created container: openstackclient |
| | openstack | kubelet | openstackclient | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-openstackclient@sha256:e1e8f9b33b9cbd07e1c9984d894a3237e9469672fb9b346889a34ba3276298e4" in 23.694s (23.694s including waiting). Image size: 594039150 bytes. |
| | openstack | kubelet | nova-cell0-5cd4-account-create-update-hwzx4 | Created | Created container: mariadb-account-create-update |
| | openstack | kubelet | neutron-5c5cd8d-bjbtl | Killing | Stopping container neutron-api |
| | openstack | kubelet | neutron-5c5cd8d-bjbtl | Killing | Stopping container neutron-httpd |
| | openstack | kubelet | nova-cell1-f7f8-account-create-update-2x5s2 | Created | Created container: mariadb-account-create-update |
| | openstack | replicaset-controller | neutron-5c5cd8d | SuccessfulDelete | Deleted pod: neutron-5c5cd8d-bjbtl |
| | openstack | kubelet | nova-cell1-f7f8-account-create-update-2x5s2 | Started | Started container mariadb-account-create-update |
| | openstack | deployment-controller | neutron | ScalingReplicaSet | Scaled down replica set neutron-5c5cd8d to 0 from 1 |
| | openstack | kubelet | ironic-conductor-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/ironic-python-agent@sha256:4527428e1352822052893ac7d017dee4d225eb1fe63635644aceec4d514b6df0" in 28.889s (28.889s including waiting). Image size: 770569006 bytes. |
| | openstack | kubelet | ironic-conductor-0 | Created | Created container: ironic-python-agent-init |
| | openstack | kubelet | ironic-conductor-0 | Started | Started container ironic-python-agent-init |
| | openstack | kubelet | swift-proxy-67bfcfbcf8-m9tkq | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-swift-proxy-server@sha256:32aab2bf162442b5c6bbb3716fbdb0ec53cb67d6b0e7f018766b29cd8cb8692d" already present on machine |
| | openstack | kubelet | swift-proxy-67bfcfbcf8-m9tkq | Started | Started container proxy-httpd |
| | openstack | kubelet | swift-proxy-67bfcfbcf8-m9tkq | Created | Created container: proxy-httpd |
| (x3) | openstack | kubelet | ironic-neutron-agent-88dd96889-vwkh6 | Created | Created container: ironic-neutron-agent |
| | openstack | kubelet | nova-cell1-db-create-69tfm | Created | Created container: mariadb-database-create |
| | openstack | kubelet | nova-cell1-db-create-69tfm | Started | Started container mariadb-database-create |
| | openstack | kubelet | ironic-inspector-db-sync-x86bq | Started | Started container ironic-inspector-db-sync |
| | openstack | job-controller | nova-api-db-create | Completed | Job completed |
| | openstack | replicaset-controller | placement-5b57c6d9b6 | SuccessfulDelete | Deleted pod: placement-5b57c6d9b6-frt4v |
| | openstack | deployment-controller | placement | ScalingReplicaSet | Scaled down replica set placement-5b57c6d9b6 to 0 from 1 |
| | openstack | kubelet | placement-5b57c6d9b6-frt4v | Killing | Stopping container placement-log |
| | openstack | kubelet | placement-5b57c6d9b6-frt4v | Killing | Stopping container placement-api |
| | openstack | kubelet | swift-proxy-67bfcfbcf8-m9tkq | Started | Started container proxy-server |
| | openstack | kubelet | swift-proxy-67bfcfbcf8-m9tkq | Created | Created container: proxy-server |
| (x2) | openstack | statefulset-controller | glance-7b9c2-default-external-api | SuccessfulDelete | delete Pod glance-7b9c2-default-external-api-0 in StatefulSet glance-7b9c2-default-external-api successful |
| | openstack | job-controller | nova-api-87e5-account-create-update | Completed | Job completed |
| | openstack | kubelet | glance-7b9c2-default-external-api-0 | Killing | Stopping container glance-log |
| | openstack | kubelet | glance-7b9c2-default-external-api-0 | Killing | Stopping container glance-httpd |
| | openstack | job-controller | nova-cell0-5cd4-account-create-update | Completed | Job completed |
| | openstack | job-controller | nova-cell1-db-create | Completed | Job completed |
| | openstack | job-controller | nova-cell1-f7f8-account-create-update | Completed | Job completed |
| | openstack | kubelet | ironic-conductor-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-ironic-pxe@sha256:e889c686d760754507fa40047ceb71fdb1f646b10532a05139a17711c1220ea5" |
| (x5) | openstack | metallb-speaker | placement-internal | nodeAssigned | announcing from node "master-0" with protocol "layer2" |
| | openstack | job-controller | nova-cell0-db-create | Completed | Job completed |
| | openstack | job-controller | ironic-inspector-db-sync | Completed | Job completed |
| | openstack | kubelet | glance-7b9c2-default-internal-api-0 | Killing | Stopping container glance-httpd |
| | openstack | kubelet | glance-7b9c2-default-internal-api-0 | Unhealthy | Liveness probe failed: Get "https://10.128.0.218:9292/healthcheck": EOF |
| | openstack | kubelet | glance-7b9c2-default-internal-api-0 | Killing | Stopping container glance-log |
| | openstack | kubelet | glance-7b9c2-default-internal-api-0 | Unhealthy | Readiness probe failed: Get "https://10.128.0.218:9292/healthcheck": EOF |
| | openstack | kubelet | glance-7b9c2-default-internal-api-0 | Unhealthy | Liveness probe failed: Get "https://10.128.0.218:9292/healthcheck": EOF |
| (x2) | openstack | statefulset-controller | glance-7b9c2-default-internal-api | SuccessfulDelete | delete Pod glance-7b9c2-default-internal-api-0 in StatefulSet glance-7b9c2-default-internal-api successful |
| | openstack | metallb-controller | ironic-inspector-internal | IPAllocated | Assigned IP ["192.168.122.80"] |
| (x3) | openstack | statefulset-controller | glance-7b9c2-default-external-api | SuccessfulCreate | create Pod glance-7b9c2-default-external-api-0 in StatefulSet glance-7b9c2-default-external-api successful |
| (x2) | openstack | metallb-controller | ironic-inspector-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs |
| (x2) | openstack | metallb-controller | ironic-inspector-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/allow-shared-ip |
| (x2) | openstack | metallb-controller | ironic-inspector-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/address-pool |
| | openstack | metallb-speaker | swift-internal | nodeAssigned | announcing from node "master-0" with protocol "layer2" |
| | openstack | replicaset-controller | dnsmasq-dns-5f4c4c4d6c | SuccessfulCreate | Created pod: dnsmasq-dns-5f4c4c4d6c-fsk8m |
| | openstack | cert-manager-certificates-trigger | ironic-inspector-internal-svc | Issuing | Issuing certificate as Secret does not exist |
| | openstack | kubelet | ironic-conductor-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-ironic-pxe@sha256:e889c686d760754507fa40047ceb71fdb1f646b10532a05139a17711c1220ea5" in 6.845s (6.845s including waiting). Image size: 656726785 bytes. |
| | openstack | job-controller | nova-cell0-conductor-db-sync | SuccessfulCreate | Created pod: nova-cell0-conductor-db-sync-8gbxf |
| (x4) | openstack | metallb-speaker | neutron-internal | nodeAssigned | announcing from node "master-0" with protocol "layer2" |
| | openstack | cert-manager-certificates-key-manager | ironic-inspector-internal-svc | Generated | Stored new private key in temporary Secret resource "ironic-inspector-internal-svc-5bmqh" |
| | openstack | cert-manager-certificates-request-manager | ironic-inspector-internal-svc | Requested | Created new CertificateRequest resource "ironic-inspector-internal-svc-1" |
| | openstack | cert-manager-certificaterequests-issuer-acme | ironic-inspector-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-venafi | ironic-inspector-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | multus | dnsmasq-dns-5f4c4c4d6c-fsk8m | AddedInterface | Add eth0 [10.128.0.254/23] from ovn-kubernetes |
| | openstack | multus | nova-cell0-conductor-db-sync-8gbxf | AddedInterface | Add eth0 [10.128.1.0/23] from ovn-kubernetes |
| | openstack | cert-manager-certificaterequests-issuer-ca | ironic-inspector-internal-svc-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificaterequests-approver | ironic-inspector-internal-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | kubelet | ironic-conductor-0 | Created | Created container: pxe-init |
| | openstack | cert-manager-certificaterequests-issuer-ca | ironic-inspector-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-issuing | ironic-inspector-internal-svc | Issuing | The certificate has been successfully issued |
| | openstack | kubelet | ironic-conductor-0 | Started | Started container pxe-init |
| | openstack | cert-manager-certificaterequests-issuer-vault | ironic-inspector-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| (x3) | openstack | statefulset-controller | glance-7b9c2-default-internal-api | SuccessfulCreate | create Pod glance-7b9c2-default-internal-api-0 in StatefulSet glance-7b9c2-default-internal-api successful |
| | openstack | kubelet | dnsmasq-dns-5f4c4c4d6c-fsk8m | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine |
| | openstack | kubelet | nova-cell0-conductor-db-sync-8gbxf | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-nova-conductor@sha256:eccc6fdd115baca2b86b615f4ff120577c13761fac897a9c60ddc6e239eb94fb" |
| | openstack | kubelet | ironic-inspector-0 | Pulled | Container image "quay.io/podified-antelope-centos9/ironic-python-agent@sha256:4527428e1352822052893ac7d017dee4d225eb1fe63635644aceec4d514b6df0" already present on machine |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | ironic-inspector-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | multus | ironic-inspector-0 | AddedInterface | Add eth0 [10.128.0.255/23] from ovn-kubernetes |
| | openstack | multus | ironic-inspector-0 | AddedInterface | Add ironic [172.20.1.32/24] from openstack/ironic |
| | openstack | multus | glance-7b9c2-default-external-api-0 | AddedInterface | Add storage [172.18.0.30/24] from openstack/storage |
| | openstack | kubelet | glance-7b9c2-default-external-api-0 | Created | Created container: glance-log |
| | openstack | kubelet | ironic-inspector-0 | Started | Started container ironic-python-agent-init |
| | openstack | kubelet | ironic-inspector-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ironic-pxe@sha256:e889c686d760754507fa40047ceb71fdb1f646b10532a05139a17711c1220ea5" already present on machine |
| | openstack | kubelet | dnsmasq-dns-5f4c4c4d6c-fsk8m | Created | Created container: init |
| | openstack | kubelet | dnsmasq-dns-5f4c4c4d6c-fsk8m | Started | Started container init |
| | openstack | kubelet | dnsmasq-dns-5f4c4c4d6c-fsk8m | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine |
| | openstack | multus | glance-7b9c2-default-external-api-0 | AddedInterface | Add eth0 [10.128.0.253/23] from ovn-kubernetes |
| | openstack | kubelet | glance-7b9c2-default-external-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-glance-api@sha256:2a146cb0eb1a819e7b367354687fa3eeb3894fa4a03eadd0dc2e2c849345cbf0" already present on machine |
| | openstack | kubelet | ironic-inspector-0 | Created | Created container: ironic-python-agent-init |
| | openstack | cert-manager-certificaterequests-issuer-ca | ironic-inspector-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | kubelet | glance-7b9c2-default-external-api-0 | Started | Started container glance-httpd |
| | openstack | cert-manager-certificaterequests-approver | ironic-inspector-public-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificaterequests-issuer-ca | ironic-inspector-public-svc-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificates-trigger | ironic-inspector-public-svc | Issuing | Issuing certificate as Secret does not exist |
| | openstack | cert-manager-certificates-key-manager | ironic-inspector-public-svc | Generated | Stored new private key in temporary Secret resource "ironic-inspector-public-svc-fntz4" |
| | openstack | cert-manager-certificates-request-manager | ironic-inspector-public-svc | Requested | Created new CertificateRequest resource "ironic-inspector-public-svc-1" |
| | openstack | kubelet | ironic-inspector-0 | Created | Created container: inspector-pxe-init |
| | openstack | kubelet | ironic-inspector-0 | Started | Started container inspector-pxe-init |
| | openstack | kubelet | dnsmasq-dns-5f4c4c4d6c-fsk8m | Created | Created container: dnsmasq-dns |
| | openstack | multus | glance-7b9c2-default-internal-api-0 | AddedInterface | Add eth0 [10.128.1.1/23] from ovn-kubernetes |
| | openstack | cert-manager-certificaterequests-issuer-venafi | ironic-inspector-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | kubelet | glance-7b9c2-default-external-api-0 | Created | Created container: glance-httpd |
| | openstack | kubelet | glance-7b9c2-default-external-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-glance-api@sha256:2a146cb0eb1a819e7b367354687fa3eeb3894fa4a03eadd0dc2e2c849345cbf0" already present on machine |
| | openstack | kubelet | glance-7b9c2-default-external-api-0 | Started | Started container glance-log |
| | openstack | cert-manager-certificaterequests-issuer-vault | ironic-inspector-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-acme | ironic-inspector-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | ironic-inspector-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | kubelet | dnsmasq-dns-5f4c4c4d6c-fsk8m | Started | Started container dnsmasq-dns |
| | openstack | multus | glance-7b9c2-default-internal-api-0 | AddedInterface | Add storage [172.18.0.31/24] from openstack/storage |
| | openstack | kubelet | ironic-inspector-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ironic-inspector@sha256:696ca56ff35797483603be60573aabc2d626a9e2886b14fbd163b25bbd01443e" already present on machine |
| | openstack | kubelet | glance-7b9c2-default-internal-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-glance-api@sha256:2a146cb0eb1a819e7b367354687fa3eeb3894fa4a03eadd0dc2e2c849345cbf0" already present on machine |
| | openstack | cert-manager-certificates-issuing | ironic-inspector-public-svc | Issuing | The certificate has been successfully issued |
| | openstack | kubelet | glance-7b9c2-default-internal-api-0 | Created | Created container: glance-log |
| | openstack | kubelet | ironic-inspector-0 | Created | Created container: ironic-inspector-httpd |
| | openstack | kubelet | glance-7b9c2-default-internal-api-0 | Started | Started container glance-log |
| | openstack | kubelet | glance-7b9c2-default-internal-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-glance-api@sha256:2a146cb0eb1a819e7b367354687fa3eeb3894fa4a03eadd0dc2e2c849345cbf0" already present on machine |
| | openstack | kubelet | ironic-inspector-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ironic-inspector@sha256:696ca56ff35797483603be60573aabc2d626a9e2886b14fbd163b25bbd01443e" already present on machine |
openstack |
kubelet |
ironic-inspector-0 |
Started |
Started container ironic-inspector-httpd | |
openstack |
kubelet |
glance-7b9c2-default-internal-api-0 |
Created |
Created container: glance-httpd | |
openstack |
kubelet |
glance-7b9c2-default-internal-api-0 |
Started |
Started container glance-httpd | |
openstack |
cert-manager-certificates-trigger |
ironic-inspector-public-route |
Issuing |
Issuing certificate as Secret does not exist | |
| | openstack | replicaset-controller | dnsmasq-dns-6b9c77ddfc | SuccessfulDelete | Deleted pod: dnsmasq-dns-6b9c77ddfc-d9zgc |
| | openstack | kubelet | dnsmasq-dns-6b9c77ddfc-d9zgc | Killing | Stopping container dnsmasq-dns |
| | openstack | kubelet | ironic-inspector-0 | Started | Started container ironic-inspector |
| | openstack | kubelet | ironic-inspector-0 | Created | Created container: ironic-inspector |
| | openstack | kubelet | ironic-inspector-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ironic-pxe@sha256:e889c686d760754507fa40047ceb71fdb1f646b10532a05139a17711c1220ea5" already present on machine |
| | openstack | cert-manager-certificates-key-manager | ironic-inspector-public-route | Generated | Stored new private key in temporary Secret resource "ironic-inspector-public-route-cs9wb" |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | ironic-inspector-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | kubelet | ironic-inspector-0 | Created | Created container: inspector-httpboot |
| | openstack | cert-manager-certificaterequests-approver | ironic-inspector-public-route-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificaterequests-issuer-venafi | ironic-inspector-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-acme | ironic-inspector-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-vault | ironic-inspector-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-ca | ironic-inspector-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-request-manager | ironic-inspector-public-route | Requested | Created new CertificateRequest resource "ironic-inspector-public-route-1" |
| | openstack | kubelet | ironic-inspector-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ironic-inspector@sha256:696ca56ff35797483603be60573aabc2d626a9e2886b14fbd163b25bbd01443e" already present on machine |
| | openstack | kubelet | ironic-inspector-0 | Started | Started container inspector-httpboot |
| | openstack | kubelet | nova-cell0-conductor-db-sync-8gbxf | Started | Started container nova-cell0-conductor-db-sync |
| | openstack | kubelet | nova-cell0-conductor-db-sync-8gbxf | Created | Created container: nova-cell0-conductor-db-sync |
| | openstack | kubelet | nova-cell0-conductor-db-sync-8gbxf | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-nova-conductor@sha256:eccc6fdd115baca2b86b615f4ff120577c13761fac897a9c60ddc6e239eb94fb" in 12.716s (12.716s including waiting). Image size: 667570153 bytes. |
| | openstack | cert-manager-certificaterequests-issuer-ca | ironic-inspector-public-route-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | kubelet | ironic-inspector-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ironic-inspector@sha256:696ca56ff35797483603be60573aabc2d626a9e2886b14fbd163b25bbd01443e" already present on machine |
| | openstack | cert-manager-certificates-issuing | ironic-inspector-public-route | Issuing | The certificate has been successfully issued |
| | openstack | kubelet | ironic-inspector-0 | Created | Created container: ramdisk-logs |
| | openstack | kubelet | ironic-inspector-0 | Started | Started container ramdisk-logs |
| | openstack | kubelet | ironic-inspector-0 | Created | Created container: inspector-dnsmasq |
| | openstack | kubelet | ironic-inspector-0 | Started | Started container inspector-dnsmasq |
| (x3) | openstack | metallb-speaker | glance-default-internal | nodeAssigned | announcing from node "master-0" with protocol "layer2" |
| | openstack | kubelet | ironic-inspector-0 | Killing | Stopping container inspector-dnsmasq |
| | openstack | kubelet | ironic-inspector-0 | Killing | Stopping container ironic-inspector-httpd |
| | openstack | kubelet | ironic-inspector-0 | Killing | Stopping container inspector-httpboot |
| | openstack | statefulset-controller | ironic-inspector | SuccessfulDelete | delete Pod ironic-inspector-0 in StatefulSet ironic-inspector successful |
| | openstack | kubelet | ironic-inspector-0 | Killing | Stopping container ironic-inspector |
| | openstack | kubelet | neutron-5c5cd8d-bjbtl | Unhealthy | Readiness probe failed: Get "http://10.128.0.231:9696/": context deadline exceeded (Client.Timeout exceeded while awaiting headers) |
| (x2) | openstack | statefulset-controller | ironic-inspector | SuccessfulCreate | create Pod ironic-inspector-0 in StatefulSet ironic-inspector successful |
| | openstack | kubelet | ironic-inspector-0 | Started | Started container ironic-python-agent-init |
| | openstack | kubelet | ironic-inspector-0 | Created | Created container: ironic-python-agent-init |
| | openstack | kubelet | ironic-inspector-0 | Pulled | Container image "quay.io/podified-antelope-centos9/ironic-python-agent@sha256:4527428e1352822052893ac7d017dee4d225eb1fe63635644aceec4d514b6df0" already present on machine |
| | openstack | multus | ironic-inspector-0 | AddedInterface | Add ironic [172.20.1.32/24] from openstack/ironic |
| | openstack | multus | ironic-inspector-0 | AddedInterface | Add eth0 [10.128.1.2/23] from ovn-kubernetes |
| | openstack | kubelet | ironic-inspector-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ironic-pxe@sha256:e889c686d760754507fa40047ceb71fdb1f646b10532a05139a17711c1220ea5" already present on machine |
| | openstack | kubelet | ironic-inspector-0 | Created | Created container: inspector-pxe-init |
| | openstack | kubelet | ironic-inspector-0 | Started | Started container inspector-pxe-init |
| | openstack | kubelet | ironic-inspector-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ironic-inspector@sha256:696ca56ff35797483603be60573aabc2d626a9e2886b14fbd163b25bbd01443e" already present on machine |
| | openstack | kubelet | ironic-inspector-0 | Created | Created container: ironic-inspector-httpd |
| | openstack | kubelet | ironic-inspector-0 | Started | Started container ironic-inspector-httpd |
| | openstack | kubelet | ironic-inspector-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ironic-inspector@sha256:696ca56ff35797483603be60573aabc2d626a9e2886b14fbd163b25bbd01443e" already present on machine |
| | openstack | kubelet | ironic-inspector-0 | Created | Created container: ironic-inspector |
| | openstack | kubelet | ironic-inspector-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ironic-inspector@sha256:696ca56ff35797483603be60573aabc2d626a9e2886b14fbd163b25bbd01443e" already present on machine |
| | openstack | kubelet | ironic-inspector-0 | Started | Started container inspector-httpboot |
| | openstack | kubelet | ironic-inspector-0 | Created | Created container: inspector-httpboot |
| | openstack | kubelet | ironic-inspector-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ironic-pxe@sha256:e889c686d760754507fa40047ceb71fdb1f646b10532a05139a17711c1220ea5" already present on machine |
| | openstack | kubelet | ironic-inspector-0 | Started | Started container ironic-inspector |
| | openstack | kubelet | ironic-inspector-0 | Created | Created container: inspector-dnsmasq |
| | openstack | kubelet | ironic-inspector-0 | Created | Created container: ramdisk-logs |
| | openstack | kubelet | ironic-inspector-0 | Started | Started container ramdisk-logs |
| | openstack | kubelet | ironic-inspector-0 | Started | Started container inspector-dnsmasq |
| | openstack | kubelet | ironic-inspector-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ironic-inspector@sha256:696ca56ff35797483603be60573aabc2d626a9e2886b14fbd163b25bbd01443e" already present on machine |
| | openstack | job-controller | nova-cell0-conductor-db-sync | Completed | Job completed |
| | openstack | statefulset-controller | nova-cell0-conductor | SuccessfulCreate | create Pod nova-cell0-conductor-0 in StatefulSet nova-cell0-conductor successful |
| | openstack | multus | nova-cell0-conductor-0 | AddedInterface | Add eth0 [10.128.1.3/23] from ovn-kubernetes |
| | openstack | kubelet | nova-cell0-conductor-0 | Started | Started container nova-cell0-conductor-conductor |
| | openstack | kubelet | nova-cell0-conductor-0 | Created | Created container: nova-cell0-conductor-conductor |
| | openstack | kubelet | nova-cell0-conductor-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-conductor@sha256:eccc6fdd115baca2b86b615f4ff120577c13761fac897a9c60ddc6e239eb94fb" already present on machine |
| (x3) | openstack | metallb-speaker | ironic-inspector-internal | nodeAssigned | announcing from node "master-0" with protocol "layer2" |
| (x2) | openstack | metallb-controller | nova-metadata-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/address-pool |
| | openstack | kubelet | ironic-conductor-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ironic-conductor@sha256:1f519a69686478381fe122716a13d116612a9b6eaeb47ab00ef4cd82b93468bf" already present on machine |
| (x2) | openstack | metallb-controller | nova-metadata-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs |
| (x2) | openstack | metallb-controller | nova-metadata-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/allow-shared-ip |
| | openstack | metallb-controller | nova-metadata-internal | IPAllocated | Assigned IP ["172.17.0.80"] |
| | openstack | statefulset-controller | nova-cell1-compute-ironic-compute | SuccessfulCreate | create Pod nova-cell1-compute-ironic-compute-0 in StatefulSet nova-cell1-compute-ironic-compute successful |
| | openstack | job-controller | nova-cell0-cell-mapping | SuccessfulCreate | Created pod: nova-cell0-cell-mapping-9btmx |
| | openstack | cert-manager-certificates-trigger | nova-metadata-internal-svc | Issuing | Issuing certificate as Secret does not exist |
| | openstack | cert-manager-certificates-key-manager | nova-metadata-internal-svc | Generated | Stored new private key in temporary Secret resource "nova-metadata-internal-svc-49wtz" |
| | openstack | multus | nova-cell0-cell-mapping-9btmx | AddedInterface | Add eth0 [10.128.1.4/23] from ovn-kubernetes |
| | openstack | replicaset-controller | dnsmasq-dns-78d5d45447 | SuccessfulCreate | Created pod: dnsmasq-dns-78d5d45447-bfqg5 |
| | openstack | kubelet | nova-cell0-cell-mapping-9btmx | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-conductor@sha256:eccc6fdd115baca2b86b615f4ff120577c13761fac897a9c60ddc6e239eb94fb" already present on machine |
| | openstack | job-controller | nova-cell1-conductor-db-sync | SuccessfulCreate | Created pod: nova-cell1-conductor-db-sync-4vxwz |
| | openstack | kubelet | ironic-conductor-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ironic-pxe@sha256:e889c686d760754507fa40047ceb71fdb1f646b10532a05139a17711c1220ea5" already present on machine |
| | openstack | cert-manager-certificaterequests-issuer-vault | nova-metadata-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | nova-metadata-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-venafi | nova-metadata-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | kubelet | ironic-conductor-0 | Created | Created container: ironic-conductor |
| | openstack | cert-manager-certificates-issuing | nova-metadata-internal-svc | Issuing | The certificate has been successfully issued |
| | openstack | multus | nova-cell1-compute-ironic-compute-0 | AddedInterface | Add eth0 [10.128.1.5/23] from ovn-kubernetes |
| | openstack | cert-manager-certificates-request-manager | nova-metadata-internal-svc | Requested | Created new CertificateRequest resource "nova-metadata-internal-svc-1" |
| | openstack | kubelet | nova-cell1-compute-ironic-compute-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:f96bd21c79ae0d7e8e17010c5e2573637d6c0f47f03e63134c477edd8ad73d83" |
| | openstack | cert-manager-certificaterequests-issuer-acme | nova-metadata-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-ca | nova-metadata-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-approver | nova-metadata-internal-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | kubelet | ironic-conductor-0 | Started | Started container ironic-conductor |
| | openstack | cert-manager-certificaterequests-issuer-ca | nova-metadata-internal-svc-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | kubelet | nova-cell0-cell-mapping-9btmx | Created | Created container: nova-manage |
| | openstack | multus | nova-scheduler-0 | AddedInterface | Add eth0 [10.128.1.8/23] from ovn-kubernetes |
| | openstack | multus | dnsmasq-dns-78d5d45447-bfqg5 | AddedInterface | Add eth0 [10.128.1.10/23] from ovn-kubernetes |
| | openstack | multus | nova-cell1-conductor-db-sync-4vxwz | AddedInterface | Add eth0 [10.128.1.11/23] from ovn-kubernetes |
| | openstack | kubelet | nova-cell1-conductor-db-sync-4vxwz | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-conductor@sha256:eccc6fdd115baca2b86b615f4ff120577c13761fac897a9c60ddc6e239eb94fb" already present on machine |
| | openstack | kubelet | nova-cell0-cell-mapping-9btmx | Started | Started container nova-manage |
| | openstack | kubelet | nova-api-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:3bd1771287e41cfa8e24138819298fe705399ee6dd7d5ce645b647f0679ae6f2" |
| | openstack | multus | nova-metadata-0 | AddedInterface | Add eth0 [10.128.1.9/23] from ovn-kubernetes |
| | openstack | kubelet | ironic-conductor-0 | Created | Created container: httpboot |
| | openstack | kubelet | ironic-conductor-0 | Started | Started container httpboot |
| | openstack | kubelet | nova-metadata-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:3bd1771287e41cfa8e24138819298fe705399ee6dd7d5ce645b647f0679ae6f2" |
| | openstack | kubelet | dnsmasq-dns-78d5d45447-bfqg5 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine |
| | openstack | kubelet | ironic-conductor-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ironic-pxe@sha256:e889c686d760754507fa40047ceb71fdb1f646b10532a05139a17711c1220ea5" already present on machine |
| | openstack | cert-manager-certificates-trigger | nova-novncproxy-cell1-public-route | Issuing | Issuing certificate as Secret does not exist |
| | openstack | kubelet | nova-scheduler-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-nova-scheduler@sha256:a0c36a1cc7545947c2910ca4cb75420dc628cacd8c103f3a630b3ed9c8e4dcda" |
| | openstack | cert-manager-certificates-request-manager | nova-novncproxy-cell1-public-svc | Requested | Created new CertificateRequest resource "nova-novncproxy-cell1-public-svc-1" |
| | openstack | multus | nova-api-0 | AddedInterface | Add eth0 [10.128.1.6/23] from ovn-kubernetes |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | nova-novncproxy-cell1-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | kubelet | nova-cell1-novncproxy-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-nova-novncproxy@sha256:f85de2d4d8b8a3b325586ba40ba12cc9a763e534589b6f1e550f41e3aee4eda1" |
| | openstack | multus | nova-cell1-novncproxy-0 | AddedInterface | Add eth0 [10.128.1.7/23] from ovn-kubernetes |
| | openstack | cert-manager-certificaterequests-issuer-acme | nova-novncproxy-cell1-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-ca | nova-novncproxy-cell1-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-vault | nova-novncproxy-cell1-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-venafi | nova-novncproxy-cell1-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-approver | nova-novncproxy-cell1-public-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificaterequests-issuer-ca | nova-novncproxy-cell1-public-svc-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificates-trigger | nova-novncproxy-cell1-public-svc | Issuing | Issuing certificate as Secret does not exist |
| | openstack | cert-manager-certificates-key-manager | nova-novncproxy-cell1-public-svc | Generated | Stored new private key in temporary Secret resource "nova-novncproxy-cell1-public-svc-gr4cg" |
| | openstack | cert-manager-certificates-issuing | nova-novncproxy-cell1-public-svc | Issuing | The certificate has been successfully issued |
| | openstack | cert-manager-certificates-request-manager | nova-novncproxy-cell1-public-route | Requested | Created new CertificateRequest resource "nova-novncproxy-cell1-public-route-1" |
| | openstack | kubelet | ironic-conductor-0 | Created | Created container: dnsmasq |
| | openstack | kubelet | ironic-conductor-0 | Started | Started container dnsmasq |
| | openstack | kubelet | nova-cell1-conductor-db-sync-4vxwz | Created | Created container: nova-cell1-conductor-db-sync |
| | openstack | kubelet | nova-cell1-conductor-db-sync-4vxwz | Started | Started container nova-cell1-conductor-db-sync |
| | openstack | kubelet | dnsmasq-dns-78d5d45447-bfqg5 | Created | Created container: init |
| | openstack | kubelet | dnsmasq-dns-78d5d45447-bfqg5 | Started | Started container init |
| | openstack | cert-manager-certificaterequests-issuer-vault | nova-novncproxy-cell1-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificates-trigger | nova-novncproxy-cell1-vencrypt | Issuing | Issuing certificate as Secret does not exist |
| | openstack | cert-manager-certificates-key-manager | nova-novncproxy-cell1-public-route | Generated | Stored new private key in temporary Secret resource "nova-novncproxy-cell1-public-route-mxfg4" |
| | openstack | kubelet | dnsmasq-dns-78d5d45447-bfqg5 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine |
| | openstack | cert-manager-certificaterequests-issuer-venafi | nova-novncproxy-cell1-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-acme | nova-novncproxy-cell1-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-ca | nova-novncproxy-cell1-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | nova-novncproxy-cell1-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-approver | nova-novncproxy-cell1-public-route-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificates-issuing | nova-novncproxy-cell1-public-route | Issuing | The certificate has been successfully issued |
| | openstack | cert-manager-certificaterequests-issuer-ca | nova-novncproxy-cell1-public-route-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificates-key-manager | nova-novncproxy-cell1-vencrypt | Generated | Stored new private key in temporary Secret resource "nova-novncproxy-cell1-vencrypt-src45" |
| | openstack | cert-manager-certificaterequests-issuer-ca | nova-novncproxy-cell1-vencrypt-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | kubelet | dnsmasq-dns-78d5d45447-bfqg5 | Started | Started container dnsmasq-dns |
| | openstack | cert-manager-certificates-issuing | nova-novncproxy-cell1-vencrypt | Issuing | The certificate has been successfully issued |
| | openstack | cert-manager-certificaterequests-issuer-vault | nova-novncproxy-cell1-vencrypt-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | cert-manager-certificaterequests-issuer-selfsigned | nova-novncproxy-cell1-vencrypt-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | kubelet | dnsmasq-dns-78d5d45447-bfqg5 | Created | Created container: dnsmasq-dns |
| | openstack | cert-manager-certificaterequests-issuer-venafi | nova-novncproxy-cell1-vencrypt-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | statefulset-controller | nova-cell1-novncproxy | SuccessfulDelete | delete Pod nova-cell1-novncproxy-0 in StatefulSet nova-cell1-novncproxy successful |
| | openstack | cert-manager-certificates-request-manager | nova-novncproxy-cell1-vencrypt | Requested | Created new CertificateRequest resource "nova-novncproxy-cell1-vencrypt-1" |
| | openstack | cert-manager-certificaterequests-issuer-ca | nova-novncproxy-cell1-vencrypt-1 | CertificateIssued | Certificate fetched from issuer successfully |
| | openstack | cert-manager-certificaterequests-approver | nova-novncproxy-cell1-vencrypt-1 | cert-manager.io | Certificate request has been approved by cert-manager.io |
| | openstack | cert-manager-certificaterequests-issuer-acme | nova-novncproxy-cell1-vencrypt-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved |
| | openstack | kubelet | nova-scheduler-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-nova-scheduler@sha256:a0c36a1cc7545947c2910ca4cb75420dc628cacd8c103f3a630b3ed9c8e4dcda" in 3.111s (3.111s including waiting). Image size: 667570155 bytes. |
| | openstack | kubelet | nova-cell1-novncproxy-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-nova-novncproxy@sha256:f85de2d4d8b8a3b325586ba40ba12cc9a763e534589b6f1e550f41e3aee4eda1" in 3.131s (3.132s including waiting). Image size: 669942770 bytes. |
| | openstack | kubelet | nova-metadata-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:3bd1771287e41cfa8e24138819298fe705399ee6dd7d5ce645b647f0679ae6f2" in 3.105s (3.105s including waiting). Image size: 684375271 bytes. |
| | openstack | kubelet | nova-api-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:3bd1771287e41cfa8e24138819298fe705399ee6dd7d5ce645b647f0679ae6f2" in 3.476s (3.476s including waiting). Image size: 684375271 bytes. |
| | openstack | kubelet | nova-scheduler-0 | Started | Started container nova-scheduler-scheduler |
| | openstack | kubelet | nova-cell1-novncproxy-0 | Killing | Stopping container nova-cell1-novncproxy-novncproxy |
| | openstack | kubelet | nova-metadata-0 | Killing | Stopping container nova-metadata-log |
| | openstack | kubelet | nova-metadata-0 | Started | Started container nova-metadata-metadata |
| | openstack | kubelet | nova-metadata-0 | Created | Created container: nova-metadata-metadata |
| | openstack | kubelet | nova-scheduler-0 | Created | Created container: nova-scheduler-scheduler |
| | openstack | kubelet | nova-metadata-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:3bd1771287e41cfa8e24138819298fe705399ee6dd7d5ce645b647f0679ae6f2" already present on machine |
| | openstack | kubelet | nova-metadata-0 | Killing | Stopping container nova-metadata-metadata |
| | openstack | kubelet | nova-metadata-0 | Started | Started container nova-metadata-log |
| | openstack | kubelet | nova-cell1-novncproxy-0 | Created | Created container: nova-cell1-novncproxy-novncproxy |
| | openstack | kubelet | nova-api-0 | Started | Started container nova-api-api |
| | openstack | kubelet | nova-api-0 | Created | Created container: nova-api-api |
| | openstack | kubelet | nova-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:3bd1771287e41cfa8e24138819298fe705399ee6dd7d5ce645b647f0679ae6f2" already present on machine |
| | openstack | kubelet | nova-api-0 | Started | Started container nova-api-log |
| | openstack | kubelet | nova-cell1-novncproxy-0 | Started | Started container nova-cell1-novncproxy-novncproxy |
| | openstack | kubelet | nova-api-0 | Created | Created container: nova-api-log |
| | openstack | kubelet | nova-metadata-0 | Created | Created container: nova-metadata-log |
| | openstack | multus | nova-metadata-0 | AddedInterface | Add eth0 [10.128.1.12/23] from ovn-kubernetes |
| | openstack | kubelet | nova-metadata-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:3bd1771287e41cfa8e24138819298fe705399ee6dd7d5ce645b647f0679ae6f2" already present on machine |
| | openstack | kubelet | nova-metadata-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:3bd1771287e41cfa8e24138819298fe705399ee6dd7d5ce645b647f0679ae6f2" already present on machine |
| | openstack | kubelet | nova-metadata-0 | Started | Started container nova-metadata-log |
| | openstack | kubelet | nova-metadata-0 | Created | Created container: nova-metadata-log |
| | openstack | kubelet | nova-api-0 | Unhealthy | Startup probe failed: Get "http://10.128.1.6:8774/": context deadline exceeded (Client.Timeout exceeded while awaiting headers) |
| | openstack | kubelet | nova-api-0 | Unhealthy | Startup probe failed: Get "http://10.128.1.6:8774/": context deadline exceeded (Client.Timeout exceeded while awaiting headers) |
| | openstack | kubelet | dnsmasq-dns-5f4c4c4d6c-fsk8m | Killing | Stopping container dnsmasq-dns |
| | openstack | replicaset-controller | dnsmasq-dns-5f4c4c4d6c | SuccessfulDelete | Deleted pod: dnsmasq-dns-5f4c4c4d6c-fsk8m |
| | openstack | kubelet | nova-metadata-0 | Started | Started container nova-metadata-metadata |
| | openstack | kubelet | nova-metadata-0 | Created | Created container: nova-metadata-metadata |
| | openstack | kubelet | nova-cell1-compute-ironic-compute-0 | Started | Started container nova-cell1-compute-ironic-compute-compute |
| | openstack | kubelet | nova-cell1-compute-ironic-compute-0 | Created | Created container: nova-cell1-compute-ironic-compute-compute |
| | openstack | kubelet | nova-cell1-compute-ironic-compute-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:f96bd21c79ae0d7e8e17010c5e2573637d6c0f47f03e63134c477edd8ad73d83" in 13.531s (13.531s including waiting). Image size: 1214548351 bytes. |
| | openstack | kubelet | nova-scheduler-0 | Killing | Stopping container nova-scheduler-scheduler |
| | openstack | job-controller | nova-cell0-cell-mapping | Completed | Job completed |
| | openstack | kubelet | nova-api-0 | Killing | Stopping container nova-api-api |
| | openstack | kubelet | nova-api-0 | Killing | Stopping container nova-api-log |
| | openstack | statefulset-controller | nova-cell1-conductor | SuccessfulCreate | create Pod nova-cell1-conductor-0 in StatefulSet nova-cell1-conductor successful |
| | openstack | job-controller | nova-cell1-conductor-db-sync | Completed | Job completed |
| | openstack | kubelet | nova-metadata-0 | Killing | Stopping container nova-metadata-metadata |
| | openstack | kubelet | nova-metadata-0 | Killing | Stopping container nova-metadata-log |
| | openstack | multus | nova-cell1-conductor-0 | AddedInterface | Add eth0 [10.128.1.13/23] from ovn-kubernetes |
| | openstack | kubelet | nova-cell1-conductor-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-conductor@sha256:eccc6fdd115baca2b86b615f4ff120577c13761fac897a9c60ddc6e239eb94fb" already present on machine |
| | openstack | multus | nova-scheduler-0 | AddedInterface | Add eth0 [10.128.1.15/23] from ovn-kubernetes |
| | openstack | multus | nova-metadata-0 | AddedInterface | Add eth0 [10.128.1.14/23] from ovn-kubernetes |
| | openstack | kubelet | nova-cell1-conductor-0 | Created | Created container: nova-cell1-conductor-conductor |
| | openstack | kubelet | nova-metadata-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:3bd1771287e41cfa8e24138819298fe705399ee6dd7d5ce645b647f0679ae6f2" already present on machine |
| | openstack | kubelet | nova-cell1-conductor-0 | Started | Started container nova-cell1-conductor-conductor |
| | openstack | kubelet | nova-metadata-0 | Created | Created container: nova-metadata-log |
| | openstack | kubelet | nova-scheduler-0 | Created | Created container: nova-scheduler-scheduler |
| | openstack | kubelet | nova-metadata-0 | Started | Started container nova-metadata-metadata |
| | openstack | kubelet | nova-metadata-0 | Started | Started container nova-metadata-log |
| | openstack | kubelet | nova-metadata-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:3bd1771287e41cfa8e24138819298fe705399ee6dd7d5ce645b647f0679ae6f2" already present on machine |
| | openstack | kubelet | nova-scheduler-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-scheduler@sha256:a0c36a1cc7545947c2910ca4cb75420dc628cacd8c103f3a630b3ed9c8e4dcda" already present on machine |
| | openstack | kubelet | nova-metadata-0 | Created | Created container: nova-metadata-metadata |
| | openstack | kubelet | nova-scheduler-0 | Started | Started container nova-scheduler-scheduler |
| | openstack | kubelet | nova-api-0 | Created | Created container: nova-api-log |
| | openstack | multus | nova-api-0 | AddedInterface | Add eth0 [10.128.1.16/23] from ovn-kubernetes |
| | openstack | kubelet | nova-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:3bd1771287e41cfa8e24138819298fe705399ee6dd7d5ce645b647f0679ae6f2" already present on machine |
| | openstack | kubelet | nova-api-0 | Started | Started container nova-api-log |
| | openstack | kubelet | nova-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:3bd1771287e41cfa8e24138819298fe705399ee6dd7d5ce645b647f0679ae6f2" already present on machine |
| | openstack | kubelet | nova-api-0 | Created | Created container: nova-api-api |
| | openstack | kubelet | nova-api-0 | Started | Started container nova-api-api |
| | openstack | kubelet | nova-metadata-0 | Unhealthy | Startup probe failed: Get "https://10.128.1.14:8775/": net/http: request canceled (Client.Timeout exceeded while awaiting headers) |
openstack |
kubelet |
nova-metadata-0 |
Unhealthy |
Startup probe failed: Get "https://10.128.1.14:8775/": net/http: request canceled (Client.Timeout exceeded while awaiting headers) | |
openstack |
kubelet |
nova-api-0 |
Unhealthy |
Startup probe failed: Get "http://10.128.1.16:8774/": context deadline exceeded (Client.Timeout exceeded while awaiting headers) | |
openstack |
kubelet |
nova-api-0 |
Unhealthy |
Startup probe failed: Get "http://10.128.1.16:8774/": context deadline exceeded (Client.Timeout exceeded while awaiting headers) | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ConfigMapUpdated |
Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml | |
| (x2) | openstack |
statefulset-controller |
nova-cell1-novncproxy |
SuccessfulCreate |
create Pod nova-cell1-novncproxy-0 in StatefulSet nova-cell1-novncproxy successful |
openstack |
kubelet |
nova-cell1-novncproxy-0 |
Started |
Started container nova-cell1-novncproxy-novncproxy | |
openstack |
kubelet |
nova-cell1-novncproxy-0 |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-nova-novncproxy@sha256:f85de2d4d8b8a3b325586ba40ba12cc9a763e534589b6f1e550f41e3aee4eda1" already present on machine | |
openstack |
kubelet |
nova-cell1-novncproxy-0 |
Created |
Created container: nova-cell1-novncproxy-novncproxy | |
openstack |
multus |
nova-cell1-novncproxy-0 |
AddedInterface |
Add eth0 [10.128.1.17/23] from ovn-kubernetes | |
| (x2) | openstack |
metallb-controller |
nova-internal |
deprecatedAnnotation |
Service uses deprecated annotation metallb.universe.tf/address-pool |
| (x2) | openstack |
metallb-controller |
nova-internal |
deprecatedAnnotation |
Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs |
openstack |
replicaset-controller |
dnsmasq-dns-8f95c8447 |
SuccessfulCreate |
Created pod: dnsmasq-dns-8f95c8447-f78pp | |
| (x2) | openstack |
metallb-controller |
nova-internal |
deprecatedAnnotation |
Service uses deprecated annotation metallb.universe.tf/allow-shared-ip |
openstack |
metallb-controller |
nova-internal |
IPAllocated |
Assigned IP ["172.17.0.80"] | |
openstack |
cert-manager-certificaterequests-issuer-ca |
nova-internal-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificates-trigger |
nova-internal-svc |
Issuing |
Issuing certificate as Secret does not exist | |
openstack |
cert-manager-certificaterequests-issuer-acme |
nova-internal-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
kubelet |
dnsmasq-dns-8f95c8447-f78pp |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine | |
openstack |
cert-manager-certificaterequests-approver |
nova-internal-svc-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack |
cert-manager-certificaterequests-issuer-ca |
nova-internal-svc-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack |
cert-manager-certificaterequests-issuer-venafi |
nova-internal-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
multus |
dnsmasq-dns-8f95c8447-f78pp |
AddedInterface |
Add eth0 [10.128.1.18/23] from ovn-kubernetes | |
openstack |
cert-manager-certificaterequests-issuer-selfsigned |
nova-internal-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-vault |
nova-internal-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificates-issuing |
nova-internal-svc |
Issuing |
The certificate has been successfully issued | |
openstack |
cert-manager-certificates-request-manager |
nova-internal-svc |
Requested |
Created new CertificateRequest resource "nova-internal-svc-1" | |
openstack |
cert-manager-certificates-key-manager |
nova-internal-svc |
Generated |
Stored new private key in temporary Secret resource "nova-internal-svc-86qh5" | |
openstack |
kubelet |
dnsmasq-dns-8f95c8447-f78pp |
Started |
Started container init | |
openstack |
cert-manager-certificates-trigger |
nova-public-svc |
Issuing |
Issuing certificate as Secret does not exist | |
openstack |
kubelet |
dnsmasq-dns-8f95c8447-f78pp |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine | |
openstack |
kubelet |
dnsmasq-dns-8f95c8447-f78pp |
Created |
Created container: init | |
openstack |
cert-manager-certificaterequests-issuer-venafi |
nova-public-route-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
kubelet |
dnsmasq-dns-8f95c8447-f78pp |
Started |
Started container dnsmasq-dns | |
openstack |
cert-manager-certificaterequests-issuer-selfsigned |
nova-public-route-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-vault |
nova-public-route-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-acme |
nova-public-route-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-ca |
nova-public-route-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-approver |
nova-public-route-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack |
cert-manager-certificaterequests-issuer-ca |
nova-public-route-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack |
cert-manager-certificates-issuing |
nova-public-svc |
Issuing |
The certificate has been successfully issued | |
openstack |
cert-manager-certificates-request-manager |
nova-public-svc |
Requested |
Created new CertificateRequest resource "nova-public-svc-1" | |
openstack |
cert-manager-certificates-key-manager |
nova-public-svc |
Generated |
Stored new private key in temporary Secret resource "nova-public-svc-8hzjb" | |
openstack |
kubelet |
dnsmasq-dns-8f95c8447-f78pp |
Created |
Created container: dnsmasq-dns | |
openstack |
cert-manager-certificaterequests-issuer-ca |
nova-public-svc-1 |
CertificateIssued |
Certificate fetched from issuer successfully | |
openstack |
cert-manager-certificaterequests-approver |
nova-public-svc-1 |
cert-manager.io |
Certificate request has been approved by cert-manager.io | |
openstack |
cert-manager-certificaterequests-issuer-venafi |
nova-public-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-vault |
nova-public-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-ca |
nova-public-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificaterequests-issuer-acme |
nova-public-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificates-trigger |
nova-public-route |
Issuing |
Issuing certificate as Secret does not exist | |
openstack |
cert-manager-certificaterequests-issuer-selfsigned |
nova-public-svc-1 |
WaitingForApproval |
Not signing CertificateRequest until it is Approved | |
openstack |
cert-manager-certificates-key-manager |
nova-public-route |
Generated |
Stored new private key in temporary Secret resource "nova-public-route-5x6sd" | |
openstack |
cert-manager-certificates-request-manager |
nova-public-route |
Requested |
Created new CertificateRequest resource "nova-public-route-1" | |
openstack |
kubelet |
nova-api-0 |
Killing |
Stopping container nova-api-api | |
openstack |
cert-manager-certificates-issuing |
nova-public-route |
Issuing |
The certificate has been successfully issued | |
openstack |
kubelet |
nova-api-0 |
Killing |
Stopping container nova-api-log | |
openstack |
job-controller |
nova-cell1-cell-mapping |
SuccessfulCreate |
Created pod: nova-cell1-cell-mapping-5x59m | |
openstack |
job-controller |
nova-cell1-host-discover |
SuccessfulCreate |
Created pod: nova-cell1-host-discover-7vrrr | |
openstack |
multus |
nova-cell1-cell-mapping-5x59m |
AddedInterface |
Add eth0 [10.128.1.19/23] from ovn-kubernetes | |
openstack |
kubelet |
nova-cell1-cell-mapping-5x59m |
Started |
Started container nova-manage | |
openstack |
kubelet |
nova-api-0 |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:3bd1771287e41cfa8e24138819298fe705399ee6dd7d5ce645b647f0679ae6f2" already present on machine | |
openstack |
kubelet |
nova-cell1-cell-mapping-5x59m |
Created |
Created container: nova-manage | |
openstack |
kubelet |
nova-cell1-cell-mapping-5x59m |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-nova-conductor@sha256:eccc6fdd115baca2b86b615f4ff120577c13761fac897a9c60ddc6e239eb94fb" already present on machine | |
openstack |
multus |
nova-api-0 |
AddedInterface |
Add eth0 [10.128.1.21/23] from ovn-kubernetes | |
openstack |
kubelet |
nova-cell1-host-discover-7vrrr |
Started |
Started container nova-manage | |
openstack |
kubelet |
nova-cell1-host-discover-7vrrr |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-nova-conductor@sha256:eccc6fdd115baca2b86b615f4ff120577c13761fac897a9c60ddc6e239eb94fb" already present on machine | |
openstack |
multus |
nova-cell1-host-discover-7vrrr |
AddedInterface |
Add eth0 [10.128.1.20/23] from ovn-kubernetes | |
openstack |
kubelet |
nova-cell1-host-discover-7vrrr |
Created |
Created container: nova-manage | |
openstack |
kubelet |
nova-api-0 |
Created |
Created container: nova-api-log | |
openstack |
kubelet |
nova-api-0 |
Started |
Started container nova-api-api | |
openstack |
kubelet |
nova-api-0 |
Created |
Created container: nova-api-api | |
openstack |
kubelet |
nova-api-0 |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:3bd1771287e41cfa8e24138819298fe705399ee6dd7d5ce645b647f0679ae6f2" already present on machine | |
openstack |
kubelet |
nova-api-0 |
Started |
Started container nova-api-log | |
openstack |
kubelet |
dnsmasq-dns-78d5d45447-bfqg5 |
Killing |
Stopping container dnsmasq-dns | |
| (x22) | openstack |
deployment-controller |
dnsmasq-dns |
ScalingReplicaSet |
(combined from similar events): Scaled down replica set dnsmasq-dns-78d5d45447 to 0 from 1 |
openstack |
replicaset-controller |
dnsmasq-dns-78d5d45447 |
SuccessfulDelete |
Deleted pod: dnsmasq-dns-78d5d45447-bfqg5 | |
openstack |
job-controller |
nova-cell1-host-discover |
Completed |
Job completed | |
| (x12) | openstack |
rabbitmqcluster-controller |
rabbitmq-cell1 |
SuccessfulUpdate |
updated resource rabbitmq-cell1-nodes of Type *v1.Service |
| (x12) | openstack |
rabbitmqcluster-controller |
rabbitmq |
SuccessfulUpdate |
updated resource rabbitmq-nodes of Type *v1.Service |
openstack |
kubelet |
nova-api-0 |
Killing |
Stopping container nova-api-api | |
openstack |
job-controller |
nova-cell1-cell-mapping |
Completed |
Job completed | |
| (x3) | openstack |
statefulset-controller |
nova-api |
SuccessfulDelete |
delete Pod nova-api-0 in StatefulSet nova-api successful |
openstack |
kubelet |
nova-scheduler-0 |
Killing |
Stopping container nova-scheduler-scheduler | |
| (x3) | openstack |
statefulset-controller |
nova-metadata |
SuccessfulDelete |
delete Pod nova-metadata-0 in StatefulSet nova-metadata successful |
openstack |
kubelet |
nova-api-0 |
Killing |
Stopping container nova-api-log | |
openstack |
kubelet |
nova-metadata-0 |
Killing |
Stopping container nova-metadata-log | |
openstack |
kubelet |
nova-metadata-0 |
Killing |
Stopping container nova-metadata-metadata | |
| (x2) | openstack |
statefulset-controller |
nova-scheduler |
SuccessfulDelete |
delete Pod nova-scheduler-0 in StatefulSet nova-scheduler successful |
| (x4) | openstack |
statefulset-controller |
nova-api |
SuccessfulCreate |
create Pod nova-api-0 in StatefulSet nova-api successful |
openstack |
multus |
nova-scheduler-0 |
AddedInterface |
Add eth0 [10.128.1.22/23] from ovn-kubernetes | |
openstack |
kubelet |
nova-scheduler-0 |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-nova-scheduler@sha256:a0c36a1cc7545947c2910ca4cb75420dc628cacd8c103f3a630b3ed9c8e4dcda" already present on machine | |
| (x3) | openstack |
statefulset-controller |
nova-scheduler |
SuccessfulCreate |
create Pod nova-scheduler-0 in StatefulSet nova-scheduler successful |
openstack |
kubelet |
nova-metadata-0 |
Unhealthy |
Readiness probe failed: Get "https://10.128.1.14:8775/": read tcp 10.128.0.2:51706->10.128.1.14:8775: read: connection reset by peer | |
openstack |
kubelet |
nova-metadata-0 |
Unhealthy |
Readiness probe failed: Get "https://10.128.1.14:8775/": read tcp 10.128.0.2:51702->10.128.1.14:8775: read: connection reset by peer | |
openstack |
kubelet |
nova-api-0 |
Started |
Started container nova-api-api | |
openstack |
multus |
nova-api-0 |
AddedInterface |
Add eth0 [10.128.1.23/23] from ovn-kubernetes | |
openstack |
kubelet |
nova-scheduler-0 |
Started |
Started container nova-scheduler-scheduler | |
openstack |
kubelet |
nova-scheduler-0 |
Created |
Created container: nova-scheduler-scheduler | |
openstack |
kubelet |
nova-api-0 |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:3bd1771287e41cfa8e24138819298fe705399ee6dd7d5ce645b647f0679ae6f2" already present on machine | |
openstack |
kubelet |
nova-api-0 |
Created |
Created container: nova-api-log | |
openstack |
kubelet |
nova-api-0 |
Started |
Started container nova-api-log | |
openstack |
kubelet |
nova-api-0 |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:3bd1771287e41cfa8e24138819298fe705399ee6dd7d5ce645b647f0679ae6f2" already present on machine | |
openstack |
kubelet |
nova-api-0 |
Created |
Created container: nova-api-api | |
| (x4) | openstack |
statefulset-controller |
nova-metadata |
SuccessfulCreate |
create Pod nova-metadata-0 in StatefulSet nova-metadata successful |
openstack |
multus |
nova-metadata-0 |
AddedInterface |
Add eth0 [10.128.1.24/23] from ovn-kubernetes | |
openstack |
kubelet |
nova-metadata-0 |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:3bd1771287e41cfa8e24138819298fe705399ee6dd7d5ce645b647f0679ae6f2" already present on machine | |
openstack |
kubelet |
nova-metadata-0 |
Created |
Created container: nova-metadata-metadata | |
openstack |
kubelet |
nova-metadata-0 |
Started |
Started container nova-metadata-metadata | |
openstack |
kubelet |
nova-metadata-0 |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:3bd1771287e41cfa8e24138819298fe705399ee6dd7d5ce645b647f0679ae6f2" already present on machine | |
openstack |
kubelet |
nova-metadata-0 |
Started |
Started container nova-metadata-log | |
openstack |
kubelet |
nova-metadata-0 |
Created |
Created container: nova-metadata-log | |
openstack |
kubelet |
nova-api-0 |
Unhealthy |
Startup probe failed: Get "https://10.128.1.23:8774/": context deadline exceeded (Client.Timeout exceeded while awaiting headers) | |
openstack |
kubelet |
nova-api-0 |
Unhealthy |
Startup probe failed: Get "https://10.128.1.23:8774/": net/http: request canceled (Client.Timeout exceeded while awaiting headers) | |
openstack |
kubelet |
nova-metadata-0 |
Unhealthy |
Startup probe failed: Get "https://10.128.1.24:8775/": context deadline exceeded (Client.Timeout exceeded while awaiting headers) | |
openstack |
kubelet |
nova-metadata-0 |
Unhealthy |
Startup probe failed: Get "https://10.128.1.24:8775/": net/http: request canceled (Client.Timeout exceeded while awaiting headers) | |
| (x3) | openstack |
metallb-speaker |
nova-internal |
nodeAssigned |
announcing from node "master-0" with protocol "layer2" |
| (x3) | openstack |
metallb-speaker |
nova-metadata-internal |
nodeAssigned |
announcing from node "master-0" with protocol "layer2" |
sushy-emulator |
replicaset-controller |
sushy-emulator-58f4c9b998 |
SuccessfulDelete |
Deleted pod: sushy-emulator-58f4c9b998-jd8tg | |
sushy-emulator |
deployment-controller |
sushy-emulator |
ScalingReplicaSet |
Scaled down replica set sushy-emulator-58f4c9b998 to 0 from 1 | |
sushy-emulator |
replicaset-controller |
sushy-emulator-64488c485f |
SuccessfulCreate |
Created pod: sushy-emulator-64488c485f-5kt65 | |
sushy-emulator |
kubelet |
sushy-emulator-58f4c9b998-jd8tg |
Killing |
Stopping container sushy-emulator | |
sushy-emulator |
deployment-controller |
sushy-emulator |
ScalingReplicaSet |
Scaled up replica set sushy-emulator-64488c485f to 1 | |
sushy-emulator |
multus |
sushy-emulator-64488c485f-5kt65 |
AddedInterface |
Add eth0 [10.128.1.25/23] from ovn-kubernetes | |
sushy-emulator |
multus |
sushy-emulator-64488c485f-5kt65 |
AddedInterface |
Add ironic [172.20.1.71/24] from sushy-emulator/ironic | |
sushy-emulator |
kubelet |
sushy-emulator-64488c485f-5kt65 |
Pulled |
Container image "quay.io/rhn_gps_hjensas/sushy-tools:dev-1761151453" already present on machine | |
sushy-emulator |
kubelet |
sushy-emulator-64488c485f-5kt65 |
Started |
Started container sushy-emulator | |
sushy-emulator |
kubelet |
sushy-emulator-64488c485f-5kt65 |
Created |
Created container: sushy-emulator | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ConfigMapUpdated |
Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ConfigMapUpdated |
Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml | |
openshift-operator-lifecycle-manager |
job-controller |
collect-profiles-29522400 |
SuccessfulCreate |
Created pod: collect-profiles-29522400-hgd4s | |
openshift-operator-lifecycle-manager |
cronjob-controller |
collect-profiles |
SuccessfulCreate |
Created job collect-profiles-29522400 | |
openshift-operator-lifecycle-manager |
multus |
collect-profiles-29522400-hgd4s |
AddedInterface |
Add eth0 [10.128.1.26/23] from ovn-kubernetes | |
openshift-operator-lifecycle-manager |
kubelet |
collect-profiles-29522400-hgd4s |
Started |
Started container collect-profiles | |
openshift-operator-lifecycle-manager |
kubelet |
collect-profiles-29522400-hgd4s |
Created |
Created container: collect-profiles | |
openshift-operator-lifecycle-manager |
kubelet |
collect-profiles-29522400-hgd4s |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" already present on machine | |
openshift-operator-lifecycle-manager |
cronjob-controller |
collect-profiles |
SuccessfulDelete |
Deleted job collect-profiles-29522355 | |
| (x2) | openshift-operator-lifecycle-manager |
cronjob-controller |
collect-profiles |
SawCompletedJob |
Saw completed job: collect-profiles-29522400, condition: Complete |
openshift-operator-lifecycle-manager |
job-controller |
collect-profiles-29522400 |
Completed |
Job completed | |
openshift-operator-lifecycle-manager |
kubelet |
catalog-operator-588944557d-kjh2v |
Unhealthy |
Readiness probe failed: Get "https://10.128.0.17:8443/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) | |
openshift-operator-lifecycle-manager |
kubelet |
catalog-operator-588944557d-kjh2v |
ProbeError |
Readiness probe error: Get "https://10.128.0.17:8443/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) body: | |
openstack |
cronjob-controller |
keystone-cron |
SuccessfulCreate |
Created job keystone-cron-29522401 | |
openstack |
job-controller |
keystone-cron-29522401 |
SuccessfulCreate |
Created pod: keystone-cron-29522401-79wwl | |
openstack |
kubelet |
keystone-cron-29522401-79wwl |
Created |
Created container: keystone-cron | |
openstack |
multus |
keystone-cron-29522401-79wwl |
AddedInterface |
Add eth0 [10.128.1.27/23] from ovn-kubernetes | |
openstack |
kubelet |
keystone-cron-29522401-79wwl |
Started |
Started container keystone-cron | |
openstack |
kubelet |
keystone-cron-29522401-79wwl |
Pulled |
Container image "quay.io/podified-antelope-centos9/openstack-keystone@sha256:d832d062b84e8f6354ac9ace6aafd6fed301d95a94751db33338dccc1ab59605" already present on machine | |
openstack |
job-controller |
keystone-cron-29522401 |
Completed |
Job completed | |
openstack |
cronjob-controller |
keystone-cron |
SawCompletedJob |
Saw completed job: keystone-cron-29522401, condition: Complete | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ConfigMapUpdated |
Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ConfigMapUpdated |
Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ConfigMapUpdated |
Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml | |
openshift-operator-lifecycle-manager |
job-controller |
collect-profiles-29522415 |
SuccessfulCreate |
Created pod: collect-profiles-29522415-dwsg2 | |
openshift-operator-lifecycle-manager |
cronjob-controller |
collect-profiles |
SuccessfulCreate |
Created job collect-profiles-29522415 | |
openshift-operator-lifecycle-manager |
multus |
collect-profiles-29522415-dwsg2 |
AddedInterface |
Add eth0 [10.128.1.28/23] from ovn-kubernetes | |
openshift-operator-lifecycle-manager |
kubelet |
collect-profiles-29522415-dwsg2 |
Started |
Started container collect-profiles | |
openshift-operator-lifecycle-manager |
kubelet |
collect-profiles-29522415-dwsg2 |
Created |
Created container: collect-profiles | |
openshift-operator-lifecycle-manager |
kubelet |
collect-profiles-29522415-dwsg2 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" already present on machine | |
openshift-operator-lifecycle-manager |
job-controller |
collect-profiles-29522415 |
Completed |
Job completed | |
openshift-operator-lifecycle-manager |
cronjob-controller |
collect-profiles |
SuccessfulDelete |
Deleted job collect-profiles-29522370 | |
openshift-operator-lifecycle-manager |
cronjob-controller |
collect-profiles |
SawCompletedJob |
Saw completed job: collect-profiles-29522415, condition: Complete | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ConfigMapUpdated |
Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ConfigMapUpdated |
Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml | |
| (x40) | openshift-kube-apiserver |
kubelet |
installer-3-master-0 |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access" : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ConfigMapUpdated |
Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml | |
openshift-kube-controller-manager |
cluster-policy-controller-namespace-security-allocation-controller |
kube-controller-manager-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-must-gather-vn6lm namespace |